Dec 16 13:03:12.400693 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 16 13:03:12.400722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:03:12.400738 kernel: BIOS-provided physical RAM map: Dec 16 13:03:12.400746 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:03:12.400754 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 16 13:03:12.400762 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Dec 16 13:03:12.400771 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Dec 16 13:03:12.400779 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Dec 16 13:03:12.400786 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Dec 16 13:03:12.400795 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 16 13:03:12.400804 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 16 13:03:12.400811 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 16 13:03:12.400819 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 16 13:03:12.400827 kernel: printk: legacy bootconsole [earlyser0] enabled Dec 16 13:03:12.400856 kernel: NX (Execute Disable) protection: active Dec 16 13:03:12.400866 kernel: APIC: Static calls initialized Dec 16 13:03:12.400874 kernel: efi: EFI v2.7 by Microsoft Dec 16 13:03:12.400883 kernel: efi: 
ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018 Dec 16 13:03:12.400892 kernel: random: crng init done Dec 16 13:03:12.400900 kernel: secureboot: Secure boot disabled Dec 16 13:03:12.400909 kernel: SMBIOS 3.1.0 present. Dec 16 13:03:12.400917 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Dec 16 13:03:12.400926 kernel: DMI: Memory slots populated: 2/2 Dec 16 13:03:12.400934 kernel: Hypervisor detected: Microsoft Hyper-V Dec 16 13:03:12.400944 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Dec 16 13:03:12.400953 kernel: Hyper-V: Nested features: 0x3e0101 Dec 16 13:03:12.400961 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 16 13:03:12.400970 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 16 13:03:12.400979 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:03:12.400988 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:03:12.400996 kernel: tsc: Detected 2300.000 MHz processor Dec 16 13:03:12.401005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:03:12.401014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:03:12.401024 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Dec 16 13:03:12.401035 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 13:03:12.401046 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:03:12.401055 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Dec 16 13:03:12.401064 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Dec 16 13:03:12.401072 kernel: Using GB pages for direct mapping Dec 16 13:03:12.401082 kernel: ACPI: Early table checksum verification 
disabled Dec 16 13:03:12.401095 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 16 13:03:12.401104 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:12.401113 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:12.401122 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 13:03:12.401131 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 16 13:03:12.401140 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:12.401152 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:12.401159 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:12.401168 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:03:12.401177 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:03:12.401186 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:03:12.401195 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 16 13:03:12.401207 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Dec 16 13:03:12.401216 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 16 13:03:12.401225 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 16 13:03:12.401234 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 16 13:03:12.401243 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 16 13:03:12.401252 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Dec 16 13:03:12.401261 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Dec 16 13:03:12.401272 kernel: ACPI: Reserving BGRT table 
memory at [mem 0x3ffd3000-0x3ffd3037] Dec 16 13:03:12.401281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 16 13:03:12.401290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Dec 16 13:03:12.401299 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Dec 16 13:03:12.401308 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Dec 16 13:03:12.401317 kernel: Zone ranges: Dec 16 13:03:12.401326 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:03:12.401337 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 13:03:12.401346 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:03:12.401354 kernel: Device empty Dec 16 13:03:12.401363 kernel: Movable zone start for each node Dec 16 13:03:12.401372 kernel: Early memory node ranges Dec 16 13:03:12.401381 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 16 13:03:12.401390 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Dec 16 13:03:12.401400 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Dec 16 13:03:12.401409 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 16 13:03:12.401418 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:03:12.401427 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 16 13:03:12.401435 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:03:12.401444 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 16 13:03:12.401453 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Dec 16 13:03:12.401464 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Dec 16 13:03:12.401473 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 16 13:03:12.401481 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 16 13:03:12.401490 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:03:12.401499 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:03:12.401508 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:03:12.401517 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 16 13:03:12.401528 kernel: TSC deadline timer available Dec 16 13:03:12.401537 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:03:12.401546 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:03:12.401554 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:03:12.401563 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:03:12.401572 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:03:12.401581 kernel: CPU topo: Num. threads per package: 2 Dec 16 13:03:12.401589 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:03:12.401600 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 16 13:03:12.401609 kernel: Booting paravirtualized kernel on Hyper-V Dec 16 13:03:12.401618 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:03:12.401627 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:03:12.401636 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:03:12.401645 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:03:12.401654 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:03:12.401664 kernel: Hyper-V: PV spinlocks enabled Dec 16 13:03:12.401674 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:03:12.401683 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin 
verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 13:03:12.401692 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 13:03:12.401701 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:03:12.401710 kernel: Fallback order for Node 0: 0 Dec 16 13:03:12.401721 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Dec 16 13:03:12.401730 kernel: Policy zone: Normal Dec 16 13:03:12.401739 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:03:12.401747 kernel: software IO TLB: area num 2. Dec 16 13:03:12.401756 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:03:12.401765 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:03:12.401774 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:03:12.401784 kernel: Dynamic Preempt: voluntary Dec 16 13:03:12.401793 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:03:12.401802 kernel: rcu: RCU event tracing is enabled. Dec 16 13:03:12.401818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:03:12.401830 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:03:12.401839 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:03:12.401862 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:03:12.401871 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:03:12.401891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:03:12.401900 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:03:12.401910 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:03:12.401920 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 13:03:12.401927 kernel: Using NULL legacy PIC Dec 16 13:03:12.401937 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 16 13:03:12.401946 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 13:03:12.401953 kernel: Console: colour dummy device 80x25 Dec 16 13:03:12.401966 kernel: printk: legacy console [tty1] enabled Dec 16 13:03:12.401974 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:03:12.401982 kernel: printk: legacy bootconsole [earlyser0] disabled Dec 16 13:03:12.401991 kernel: ACPI: Core revision 20240827 Dec 16 13:03:12.402000 kernel: Failed to register legacy timer interrupt Dec 16 13:03:12.402010 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:03:12.402019 kernel: x2apic enabled Dec 16 13:03:12.402028 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:03:12.402037 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Dec 16 13:03:12.402046 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 13:03:12.402056 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Dec 16 13:03:12.402065 kernel: Hyper-V: Using IPI hypercalls Dec 16 13:03:12.402077 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 16 13:03:12.402086 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 16 13:03:12.402095 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 16 13:03:12.402104 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 16 13:03:12.402114 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 16 13:03:12.402123 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 16 13:03:12.402132 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Dec 16 13:03:12.402143 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Dec 16 13:03:12.402152 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 13:03:12.402161 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 13:03:12.402169 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 13:03:12.402178 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:03:12.402186 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 13:03:12.402195 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:03:12.402204 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 16 13:03:12.402214 kernel: RETBleed: Vulnerable Dec 16 13:03:12.402222 kernel: Speculative Store Bypass: Vulnerable Dec 16 13:03:12.402230 kernel: active return thunk: its_return_thunk Dec 16 13:03:12.402239 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:03:12.402248 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:03:12.402257 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:03:12.402267 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:03:12.402276 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 16 13:03:12.402285 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 16 13:03:12.402294 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 16 13:03:12.402305 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Dec 16 13:03:12.402313 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Dec 16 13:03:12.402322 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Dec 16 13:03:12.402331 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:03:12.402340 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 16 13:03:12.402348 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 16 13:03:12.402357 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 16 13:03:12.402367 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Dec 16 13:03:12.402376 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Dec 16 13:03:12.402386 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Dec 16 13:03:12.402396 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Dec 16 13:03:12.402406 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:03:12.402415 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:03:12.402424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:03:12.402432 kernel: landlock: Up and running. Dec 16 13:03:12.402441 kernel: SELinux: Initializing. Dec 16 13:03:12.402451 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:03:12.402460 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:03:12.402470 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Dec 16 13:03:12.402480 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Dec 16 13:03:12.402490 kernel: signal: max sigframe size: 11952 Dec 16 13:03:12.402501 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:03:12.402511 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:03:12.402520 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:03:12.402529 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:03:12.402538 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:03:12.402547 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:03:12.402557 kernel: .... 
node #0, CPUs: #1 Dec 16 13:03:12.402569 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:03:12.402579 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Dec 16 13:03:12.402590 kernel: Memory: 8095532K/8383228K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 281560K reserved, 0K cma-reserved) Dec 16 13:03:12.402599 kernel: devtmpfs: initialized Dec 16 13:03:12.402608 kernel: x86/mm: Memory block size: 128MB Dec 16 13:03:12.402617 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 16 13:03:12.402627 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:03:12.402638 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:03:12.402649 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:03:12.402658 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:03:12.402668 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:03:12.402678 kernel: audit: type=2000 audit(1765890186.069:1): state=initialized audit_enabled=0 res=1 Dec 16 13:03:12.402687 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:03:12.402696 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:03:12.402707 kernel: cpuidle: using governor menu Dec 16 13:03:12.402717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:03:12.402726 kernel: dca service started, version 1.12.1 Dec 16 13:03:12.402735 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Dec 16 13:03:12.402744 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Dec 16 13:03:12.402754 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 13:03:12.402764 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:03:12.402776 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:03:12.402786 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:03:12.402796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:03:12.402805 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:03:12.402814 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:03:12.402823 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:03:12.402832 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 13:03:12.402860 kernel: ACPI: Interpreter enabled Dec 16 13:03:12.402871 kernel: ACPI: PM: (supports S0 S5) Dec 16 13:03:12.402881 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:03:12.402890 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:03:12.402900 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 13:03:12.402909 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 16 13:03:12.402918 kernel: iommu: Default domain type: Translated Dec 16 13:03:12.402928 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:03:12.402940 kernel: efivars: Registered efivars operations Dec 16 13:03:12.402950 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:03:12.402960 kernel: PCI: System does not support PCI Dec 16 13:03:12.402970 kernel: vgaarb: loaded Dec 16 13:03:12.402979 kernel: clocksource: Switched to clocksource tsc-early Dec 16 13:03:12.402989 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:03:12.402998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:03:12.403009 kernel: pnp: PnP ACPI init Dec 16 13:03:12.403019 kernel: pnp: PnP ACPI: found 3 devices Dec 16 13:03:12.403032 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 
13:03:12.403042 kernel: NET: Registered PF_INET protocol family Dec 16 13:03:12.403052 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:03:12.403062 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 13:03:12.403071 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:03:12.403083 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:03:12.403092 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 13:03:12.403101 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 13:03:12.403111 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:03:12.403121 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:03:12.403132 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:03:12.403142 kernel: NET: Registered PF_XDP protocol family Dec 16 13:03:12.403152 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:03:12.403162 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:03:12.403171 kernel: software IO TLB: mapped [mem 0x000000003a9ba000-0x000000003e9ba000] (64MB) Dec 16 13:03:12.403180 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Dec 16 13:03:12.403190 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Dec 16 13:03:12.403201 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Dec 16 13:03:12.403212 kernel: clocksource: Switched to clocksource tsc Dec 16 13:03:12.403223 kernel: Initialise system trusted keyrings Dec 16 13:03:12.403233 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 13:03:12.403242 kernel: Key type asymmetric registered Dec 16 13:03:12.403251 kernel: Asymmetric key parser 'x509' registered Dec 16 13:03:12.403261 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:03:12.403271 kernel: io scheduler mq-deadline registered Dec 16 13:03:12.403281 kernel: io scheduler kyber registered Dec 16 13:03:12.403293 kernel: io scheduler bfq registered Dec 16 13:03:12.403303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:03:12.403313 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:03:12.403322 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:03:12.403331 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 13:03:12.403341 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:03:12.403350 kernel: i8042: PNP: No PS/2 controller found. Dec 16 13:03:12.403534 kernel: rtc_cmos 00:02: registered as rtc0 Dec 16 13:03:12.403645 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:03:08 UTC (1765890188) Dec 16 13:03:12.403748 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 16 13:03:12.403760 kernel: intel_pstate: Intel P-state driver initializing Dec 16 13:03:12.403770 kernel: efifb: probing for efifb Dec 16 13:03:12.403779 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 13:03:12.403791 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 13:03:12.403801 kernel: efifb: scrolling: redraw Dec 16 13:03:12.403811 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:03:12.403821 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:03:12.403831 kernel: fb0: EFI VGA frame buffer device Dec 16 13:03:12.403851 kernel: pstore: Using crash dump compression: deflate Dec 16 13:03:12.403862 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:03:12.403873 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:03:12.403882 kernel: Segment Routing with IPv6 Dec 16 13:03:12.403892 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 
13:03:12.403901 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:03:12.403911 kernel: Key type dns_resolver registered Dec 16 13:03:12.403921 kernel: IPI shorthand broadcast: enabled Dec 16 13:03:12.403931 kernel: sched_clock: Marking stable (2080004377, 97512453)->(2550011539, -372494709) Dec 16 13:03:12.403941 kernel: registered taskstats version 1 Dec 16 13:03:12.403952 kernel: Loading compiled-in X.509 certificates Dec 16 13:03:12.403962 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 16 13:03:12.403971 kernel: Demotion targets for Node 0: null Dec 16 13:03:12.403980 kernel: Key type .fscrypt registered Dec 16 13:03:12.403990 kernel: Key type fscrypt-provisioning registered Dec 16 13:03:12.404000 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:03:12.404010 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:03:12.404022 kernel: ima: No architecture policies found Dec 16 13:03:12.404032 kernel: clk: Disabling unused clocks Dec 16 13:03:12.404042 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 16 13:03:12.404051 kernel: Write protecting the kernel read-only data: 45056k Dec 16 13:03:12.404060 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 16 13:03:12.404070 kernel: Run /init as init process Dec 16 13:03:12.404079 kernel: with arguments: Dec 16 13:03:12.404091 kernel: /init Dec 16 13:03:12.404101 kernel: with environment: Dec 16 13:03:12.404111 kernel: HOME=/ Dec 16 13:03:12.404120 kernel: TERM=linux Dec 16 13:03:12.404130 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 13:03:12.404141 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:03:12.404152 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:03:12.404166 kernel: PTP clock support registered Dec 16 13:03:12.404177 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 13:03:12.404188 kernel: hv_vmbus: registering driver hv_utils Dec 16 13:03:12.404198 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 13:03:12.404208 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 13:03:12.404217 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 13:03:12.404228 kernel: SCSI subsystem initialized Dec 16 13:03:12.404239 kernel: hv_vmbus: registering driver hv_pci Dec 16 13:03:12.404417 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 16 13:03:12.404563 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 16 13:03:12.404714 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 16 13:03:12.404830 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:03:12.404999 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 16 13:03:12.405125 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 16 13:03:12.405241 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:03:12.405364 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 16 13:03:12.405376 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 13:03:12.405505 kernel: scsi host0: storvsc_host_t Dec 16 13:03:12.405665 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 13:03:12.405677 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:03:12.405687 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 13:03:12.405698 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 13:03:12.405819 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] 
on Dec 16 13:03:12.405833 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 13:03:12.407016 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 13:03:12.407177 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 16 13:03:12.407329 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 16 13:03:12.407429 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:03:12.407442 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:03:12.407583 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 13:03:12.407597 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:03:12.407722 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 13:03:12.407734 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:03:12.407744 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:03:12.407755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:03:12.407766 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 13:03:12.407790 kernel: raid6: avx512x4 gen() 43497 MB/s Dec 16 13:03:12.407802 kernel: raid6: avx512x2 gen() 42842 MB/s Dec 16 13:03:12.407811 kernel: raid6: avx512x1 gen() 25079 MB/s Dec 16 13:03:12.407821 kernel: raid6: avx2x4 gen() 34939 MB/s Dec 16 13:03:12.407830 kernel: raid6: avx2x2 gen() 36621 MB/s Dec 16 13:03:12.407862 kernel: raid6: avx2x1 gen() 29989 MB/s Dec 16 13:03:12.407873 kernel: raid6: using algorithm avx512x4 gen() 43497 MB/s Dec 16 13:03:12.407885 kernel: raid6: .... 
xor() 7202 MB/s, rmw enabled Dec 16 13:03:12.407895 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:03:12.407905 kernel: xor: automatically using best checksumming function avx Dec 16 13:03:12.407915 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:03:12.407925 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (899) Dec 16 13:03:12.407938 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 13:03:12.407949 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:12.407961 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:03:12.407971 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:03:12.407980 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:03:12.407990 kernel: loop: module loaded Dec 16 13:03:12.408000 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 13:03:12.408011 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:03:12.408023 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:03:12.408040 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:03:12.408051 systemd[1]: Detected virtualization microsoft. Dec 16 13:03:12.408061 systemd[1]: Detected architecture x86-64. Dec 16 13:03:12.408071 systemd[1]: Running in initrd. Dec 16 13:03:12.408089 systemd[1]: No hostname configured, using default hostname. Dec 16 13:03:12.408100 systemd[1]: Hostname set to . Dec 16 13:03:12.408113 systemd[1]: Initializing machine ID from random generator. 
Dec 16 13:03:12.408124 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:03:12.408135 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:03:12.408146 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:12.408156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:12.408168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:03:12.408180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:03:12.408191 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:03:12.408203 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:03:12.408216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:12.408228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:12.408237 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:03:12.408248 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:03:12.408258 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:03:12.408268 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:03:12.408279 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:03:12.408291 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:03:12.408302 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:03:12.408314 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 13:03:12.408325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:03:12.408335 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:03:12.408345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:12.408356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:12.408368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:12.408379 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:03:12.408391 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:03:12.408403 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:03:12.408413 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:03:12.408423 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:03:12.408434 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:03:12.408446 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:03:12.408456 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:03:12.408466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:03:12.408478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:12.408491 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:03:12.408502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:12.408514 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:03:12.408524 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:03:12.408553 systemd-journald[1033]: Collecting audit messages is enabled.
Dec 16 13:03:12.408582 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:03:12.408594 systemd-journald[1033]: Journal started
Dec 16 13:03:12.408619 systemd-journald[1033]: Runtime Journal (/run/log/journal/bb242a51358c4fafaa41cb55d49c7e7f) is 8M, max 158.5M, 150.5M free.
Dec 16 13:03:12.414859 kernel: audit: type=1130 audit(1765890192.408:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.414905 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:03:12.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.421878 kernel: audit: type=1130 audit(1765890192.415:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.422632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:03:12.427964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:03:12.445857 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:03:12.447465 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:03:12.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.449706 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:12.454878 kernel: audit: type=1130 audit(1765890192.449:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.471089 kernel: Bridge firewalling registered
Dec 16 13:03:12.470555 systemd-modules-load[1037]: Inserted module 'br_netfilter'
Dec 16 13:03:12.472605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:12.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.478085 kernel: audit: type=1130 audit(1765890192.472:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.476900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:12.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.482105 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:03:12.482906 kernel: audit: type=1130 audit(1765890192.476:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.497043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:12.505463 kernel: audit: type=1130 audit(1765890192.497:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.499112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:12.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.510874 kernel: audit: type=1130 audit(1765890192.505:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.511008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:03:12.515000 audit: BPF prog-id=6 op=LOAD
Dec 16 13:03:12.521181 kernel: audit: type=1334 audit(1765890192.515:9): prog-id=6 op=LOAD
Dec 16 13:03:12.518973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:12.541856 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:03:12.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.554870 kernel: audit: type=1130 audit(1765890192.543:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.616023 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:03:12.676777 systemd-resolved[1065]: Positive Trust Anchors:
Dec 16 13:03:12.678357 systemd-resolved[1065]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:03:12.678416 systemd-resolved[1065]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 13:03:12.678455 systemd-resolved[1065]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:03:12.720813 dracut-cmdline[1077]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 13:03:12.734196 systemd-resolved[1065]: Defaulting to hostname 'linux'.
Dec 16 13:03:12.737337 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:12.741928 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:12.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.753550 kernel: audit: type=1130 audit(1765890192.740:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:12.820868 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:03:12.886875 kernel: iscsi: registered transport (tcp)
Dec 16 13:03:12.942296 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:03:12.942369 kernel: QLogic iSCSI HBA Driver
Dec 16 13:03:13.005227 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:03:13.020655 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:13.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.025264 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:03:13.058661 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:03:13.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.061476 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:03:13.067027 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:03:13.096171 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:03:13.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.100000 audit: BPF prog-id=7 op=LOAD
Dec 16 13:03:13.100000 audit: BPF prog-id=8 op=LOAD
Dec 16 13:03:13.102944 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:13.132220 systemd-udevd[1321]: Using default interface naming scheme 'v257'.
Dec 16 13:03:13.144795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:13.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.151797 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:03:13.175864 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:03:13.181911 dracut-pre-trigger[1388]: rd.md=0: removing MD RAID activation
Dec 16 13:03:13.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.184000 audit: BPF prog-id=9 op=LOAD
Dec 16 13:03:13.186034 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:03:13.206587 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:03:13.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.214187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:03:13.239493 systemd-networkd[1429]: lo: Link UP
Dec 16 13:03:13.239504 systemd-networkd[1429]: lo: Gained carrier
Dec 16 13:03:13.240149 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:03:13.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.247987 systemd[1]: Reached target network.target - Network.
Dec 16 13:03:13.269192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:13.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.276389 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:03:13.384221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:13.384357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:13.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.390023 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:13.416864 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:03:13.436874 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 13:03:13.447914 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd123eeb (unnamed net_device) (uninitialized): VF slot 1 added
Dec 16 13:03:13.450477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:13.474277 systemd-networkd[1429]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:13.482732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#178 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:03:13.482954 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:03:13.474286 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:03:13.475048 systemd-networkd[1429]: eth0: Link UP
Dec 16 13:03:13.475182 systemd-networkd[1429]: eth0: Gained carrier
Dec 16 13:03:13.475202 systemd-networkd[1429]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:13.499955 systemd-networkd[1429]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16
Dec 16 13:03:13.525331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:13.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.627868 kernel: nvme nvme0: using unchecked data buffer
Dec 16 13:03:13.734895 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 13:03:13.739954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:03:13.846572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:03:13.872185 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Dec 16 13:03:13.906497 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Dec 16 13:03:13.979088 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:03:13.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:13.982570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:03:13.986132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:13.990192 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:03:13.998460 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:03:14.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:14.027998 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:03:14.477094 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Dec 16 13:03:14.477379 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Dec 16 13:03:14.479972 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Dec 16 13:03:14.481522 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 13:03:14.487124 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:03:14.491883 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Dec 16 13:03:14.497919 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Dec 16 13:03:14.497990 kernel: pci 7870:00:00.0: enabling Extended Tags
Dec 16 13:03:14.520189 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 13:03:14.520426 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Dec 16 13:03:14.524947 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Dec 16 13:03:14.549760 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Dec 16 13:03:14.559863 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Dec 16 13:03:14.561858 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd123eeb eth0: VF registering: eth1
Dec 16 13:03:14.562040 kernel: mana 7870:00:00.0 eth1: joined to eth0
Dec 16 13:03:14.569687 systemd-networkd[1429]: eth1: Interface name change detected, renamed to enP30832s1.
Dec 16 13:03:14.572956 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Dec 16 13:03:14.668878 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:03:14.672813 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:03:14.673089 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd123eeb eth0: Data path switched to VF: enP30832s1
Dec 16 13:03:14.673414 systemd-networkd[1429]: enP30832s1: Link UP
Dec 16 13:03:14.673624 systemd-networkd[1429]: enP30832s1: Gained carrier
Dec 16 13:03:15.028460 disk-uuid[1605]: Warning: The kernel is still using the old partition table.
Dec 16 13:03:15.028460 disk-uuid[1605]: The new table will be used at the next reboot or after you
Dec 16 13:03:15.028460 disk-uuid[1605]: run partprobe(8) or kpartx(8)
Dec 16 13:03:15.028460 disk-uuid[1605]: The operation has completed successfully.
Dec 16 13:03:15.040669 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:03:15.040783 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:03:15.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:15.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:15.045908 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:03:15.095860 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1651)
Dec 16 13:03:15.098684 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:15.098723 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:15.121217 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:03:15.121275 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:03:15.122332 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:03:15.128860 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:15.129568 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:03:15.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:15.133149 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:03:15.218038 systemd-networkd[1429]: eth0: Gained IPv6LL
Dec 16 13:03:16.559831 ignition[1670]: Ignition 2.22.0
Dec 16 13:03:16.559863 ignition[1670]: Stage: fetch-offline
Dec 16 13:03:16.562033 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:03:16.560001 ignition[1670]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:16.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:16.560017 ignition[1670]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:16.560105 ignition[1670]: parsed url from cmdline: ""
Dec 16 13:03:16.569682 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:03:16.560108 ignition[1670]: no config URL provided
Dec 16 13:03:16.560113 ignition[1670]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.560122 ignition[1670]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.560127 ignition[1670]: failed to fetch config: resource requires networking
Dec 16 13:03:16.560328 ignition[1670]: Ignition finished successfully
Dec 16 13:03:16.597869 ignition[1676]: Ignition 2.22.0
Dec 16 13:03:16.597880 ignition[1676]: Stage: fetch
Dec 16 13:03:16.598099 ignition[1676]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:16.598106 ignition[1676]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:16.598180 ignition[1676]: parsed url from cmdline: ""
Dec 16 13:03:16.598183 ignition[1676]: no config URL provided
Dec 16 13:03:16.598187 ignition[1676]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.598193 ignition[1676]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:03:16.598212 ignition[1676]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 13:03:16.677979 ignition[1676]: GET result: OK
Dec 16 13:03:16.678068 ignition[1676]: config has been read from IMDS userdata
Dec 16 13:03:16.678098 ignition[1676]: parsing config with SHA512: 9133a63cb8060cd4ac889b11750d782eb7ce916a5e6cf0f75101eb7e5207a471608061e5f386e8a5f5dd7e4955f3b844c297338785af10a42576f29d5db59f44
Dec 16 13:03:16.682164 unknown[1676]: fetched base config from "system"
Dec 16 13:03:16.682174 unknown[1676]: fetched base config from "system"
Dec 16 13:03:16.682562 ignition[1676]: fetch: fetch complete
Dec 16 13:03:16.682179 unknown[1676]: fetched user config from "azure"
Dec 16 13:03:16.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:16.682567 ignition[1676]: fetch: fetch passed
Dec 16 13:03:16.686442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:03:16.682611 ignition[1676]: Ignition finished successfully
Dec 16 13:03:16.692172 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:03:16.719137 ignition[1683]: Ignition 2.22.0
Dec 16 13:03:16.719149 ignition[1683]: Stage: kargs
Dec 16 13:03:16.719391 ignition[1683]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:16.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:16.722641 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:03:16.719399 ignition[1683]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:16.726161 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:03:16.720553 ignition[1683]: kargs: kargs passed
Dec 16 13:03:16.720593 ignition[1683]: Ignition finished successfully
Dec 16 13:03:16.757103 ignition[1690]: Ignition 2.22.0
Dec 16 13:03:16.757113 ignition[1690]: Stage: disks
Dec 16 13:03:16.757329 ignition[1690]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:16.760131 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:03:16.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:16.757337 ignition[1690]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:16.764867 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:03:16.758236 ignition[1690]: disks: disks passed
Dec 16 13:03:16.770330 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:03:16.758275 ignition[1690]: Ignition finished successfully
Dec 16 13:03:16.775483 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:03:16.778901 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:03:16.779301 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:03:16.780162 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:03:16.913774 systemd-fsck[1699]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks
Dec 16 13:03:16.919119 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:03:16.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:16.923371 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:03:17.339861 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none.
Dec 16 13:03:17.340041 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:03:17.342439 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:03:17.382819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:03:17.386582 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:03:17.392247 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 13:03:17.396548 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:03:17.396600 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:03:17.404472 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:03:17.410983 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:03:17.416884 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1708)
Dec 16 13:03:17.419904 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:17.420000 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:17.425575 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:03:17.425618 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:03:17.427239 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:03:17.428282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:03:18.026276 coreos-metadata[1710]: Dec 16 13:03:18.025 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:03:18.030524 coreos-metadata[1710]: Dec 16 13:03:18.030 INFO Fetch successful
Dec 16 13:03:18.032914 coreos-metadata[1710]: Dec 16 13:03:18.030 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:03:18.039004 coreos-metadata[1710]: Dec 16 13:03:18.038 INFO Fetch successful
Dec 16 13:03:18.041952 coreos-metadata[1710]: Dec 16 13:03:18.040 INFO wrote hostname ci-4515.1.0-a-968fde264e to /sysroot/etc/hostname
Dec 16 13:03:18.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:18.041707 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:03:18.052229 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 16 13:03:18.052248 kernel: audit: type=1130 audit(1765890198.044:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:18.205686 initrd-setup-root[1738]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:03:18.243226 initrd-setup-root[1745]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:03:18.281763 initrd-setup-root[1752]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:03:18.303627 initrd-setup-root[1759]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:03:19.039754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:03:19.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.046348 kernel: audit: type=1130 audit(1765890199.039:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.045950 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:03:19.048078 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:03:19.084397 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:19.084215 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:03:19.100971 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:03:19.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.107867 kernel: audit: type=1130 audit(1765890199.101:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.115113 ignition[1827]: INFO : Ignition 2.22.0
Dec 16 13:03:19.115113 ignition[1827]: INFO : Stage: mount
Dec 16 13:03:19.119928 ignition[1827]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:19.119928 ignition[1827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:19.119928 ignition[1827]: INFO : mount: mount passed
Dec 16 13:03:19.119928 ignition[1827]: INFO : Ignition finished successfully
Dec 16 13:03:19.135071 kernel: audit: type=1130 audit(1765890199.122:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:19.118779 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:03:19.125364 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:03:19.151323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:03:19.171860 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1839)
Dec 16 13:03:19.172058 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 13:03:19.173952 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:03:19.179623 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 13:03:19.179659 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Dec 16 13:03:19.181107 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 13:03:19.182838 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:03:19.204548 ignition[1856]: INFO : Ignition 2.22.0
Dec 16 13:03:19.204548 ignition[1856]: INFO : Stage: files
Dec 16 13:03:19.209917 ignition[1856]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:19.209917 ignition[1856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:19.209917 ignition[1856]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:03:19.209917 ignition[1856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:03:19.209917 ignition[1856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:03:19.250104 ignition[1856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:03:19.253940 ignition[1856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:03:19.253940 ignition[1856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:03:19.251351 unknown[1856]: wrote ssh authorized keys file for user: core
Dec 16 13:03:19.306368 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:03:19.309085 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:03:19.341598 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:03:19.429118 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:03:19.432937 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:03:19.456883 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:03:19.456883 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:03:19.456883 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:19.456883 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:19.456883 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:19.456883 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 13:03:19.870447 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:03:20.093903 ignition[1856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:20.093903 ignition[1856]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:03:20.145789 ignition[1856]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:03:20.158352 ignition[1856]: INFO : files: files passed
Dec 16 13:03:20.158352 ignition[1856]: INFO : Ignition finished successfully
Dec 16 13:03:20.173212 kernel: audit: type=1130 audit(1765890200.164:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.160675 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:03:20.168999 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:03:20.179985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:03:20.195591 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:03:20.197948 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:03:20.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.210963 initrd-setup-root-after-ignition[1888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:20.218932 kernel: audit: type=1130 audit(1765890200.202:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.218963 kernel: audit: type=1131 audit(1765890200.202:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.217516 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:03:20.222201 initrd-setup-root-after-ignition[1888]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:20.231400 kernel: audit: type=1130 audit(1765890200.222:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.231491 initrd-setup-root-after-ignition[1892]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:20.224128 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:03:20.235334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:03:20.280298 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:03:20.280396 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:03:20.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.287764 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:03:20.292166 kernel: audit: type=1130 audit(1765890200.282:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.292186 kernel: audit: type=1131 audit(1765890200.282:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.293933 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:03:20.298220 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:03:20.298897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:03:20.325970 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:03:20.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.328985 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:03:20.346912 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:03:20.347133 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:20.350111 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:20.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.350380 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:03:20.350869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:03:20.351006 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:03:20.357277 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:03:20.361008 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:03:20.364999 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:03:20.369000 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:03:20.373989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:03:20.376850 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:03:20.380960 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:03:20.383725 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:03:20.389006 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:03:20.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.393000 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:03:20.396990 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:03:20.399308 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:03:20.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.399447 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:03:20.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.404206 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:20.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.406671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:20.408021 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:03:20.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.408476 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:20.411599 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:03:20.411727 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:03:20.414702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:03:20.414833 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:03:20.417674 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:03:20.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.456324 ignition[1912]: INFO : Ignition 2.22.0
Dec 16 13:03:20.456324 ignition[1912]: INFO : Stage: umount
Dec 16 13:03:20.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.417792 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:03:20.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.467054 ignition[1912]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:20.467054 ignition[1912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 13:03:20.467054 ignition[1912]: INFO : umount: umount passed
Dec 16 13:03:20.467054 ignition[1912]: INFO : Ignition finished successfully
Dec 16 13:03:20.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.420620 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 13:03:20.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.420743 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 13:03:20.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.425059 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:03:20.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.427962 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:03:20.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.428140 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:20.437562 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:03:20.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.446269 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:03:20.446455 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:20.452387 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:03:20.452520 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:20.461051 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:03:20.461160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:03:20.468682 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:03:20.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.468775 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:03:20.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.480803 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:03:20.480918 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:03:20.483394 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:03:20.483494 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:03:20.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.486933 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:03:20.556000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 13:03:20.486981 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:03:20.490964 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:03:20.491006 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:03:20.494924 systemd[1]: Stopped target network.target - Network.
Dec 16 13:03:20.498896 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:03:20.498947 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:03:20.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.569000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 13:03:20.503924 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:03:20.506209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:03:20.506805 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:20.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.510098 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:03:20.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.516907 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:03:20.521939 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:03:20.521977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:03:20.524919 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:03:20.524955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:03:20.527945 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 13:03:20.527972 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 13:03:20.528474 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:03:20.528520 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:03:20.533934 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:03:20.533980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:03:20.538005 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:03:20.541012 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:20.545011 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:03:20.551417 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:03:20.551736 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:03:20.560936 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:03:20.561040 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:20.566539 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:03:20.568140 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:03:20.568191 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:20.574937 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:03:20.578890 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:03:20.578959 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:03:20.582952 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:03:20.583001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:20.586743 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:03:20.586783 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:20.590413 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:20.633208 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:03:20.634418 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:20.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.639550 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:03:20.640926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:20.645951 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:03:20.645985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:20.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.650907 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:03:20.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.650956 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:03:20.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.656219 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:03:20.656270 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:03:20.658498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:03:20.658543 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:03:20.664045 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:03:20.674893 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:03:20.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:20.675400 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:20.679365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:03:20.679421 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:20.679582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:20.679615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:20.701032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:03:20.702455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:03:20.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:20.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:20.728155 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd123eeb eth0: Data path switched from VF: enP30832s1 Dec 16 13:03:20.728473 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:03:20.730217 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:03:20.731344 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:03:20.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:20.882962 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:03:20.883074 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:03:20.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:20.887274 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:03:20.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:20.889659 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:03:20.889720 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 16 13:03:20.894876 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:03:20.914155 systemd[1]: Switching root. Dec 16 13:03:21.001332 systemd-journald[1033]: Journal stopped Dec 16 13:03:25.030015 systemd-journald[1033]: Received SIGTERM from PID 1 (systemd). Dec 16 13:03:25.030055 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:03:25.030075 kernel: SELinux: policy capability open_perms=1 Dec 16 13:03:25.030086 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:03:25.030097 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:03:25.030108 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:03:25.030120 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:03:25.030135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:03:25.030146 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:03:25.030157 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:03:25.030168 systemd[1]: Successfully loaded SELinux policy in 169.327ms. Dec 16 13:03:25.030180 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.506ms. Dec 16 13:03:25.030195 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:03:25.030213 systemd[1]: Detected virtualization microsoft. Dec 16 13:03:25.030227 systemd[1]: Detected architecture x86-64. Dec 16 13:03:25.030238 systemd[1]: Detected first boot. Dec 16 13:03:25.030252 systemd[1]: Hostname set to . Dec 16 13:03:25.030267 systemd[1]: Initializing machine ID from random generator. Dec 16 13:03:25.030279 zram_generator::config[1956]: No configuration found. 
Dec 16 13:03:25.030295 kernel: Guest personality initialized and is inactive Dec 16 13:03:25.030306 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 16 13:03:25.030320 kernel: Initialized host personality Dec 16 13:03:25.030332 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:03:25.030343 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:03:25.030356 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 13:03:25.030367 kernel: audit: type=1334 audit(1765890204.567:90): prog-id=12 op=LOAD Dec 16 13:03:25.030380 kernel: audit: type=1334 audit(1765890204.567:91): prog-id=3 op=UNLOAD Dec 16 13:03:25.030392 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:03:25.030403 kernel: audit: type=1334 audit(1765890204.567:92): prog-id=13 op=LOAD Dec 16 13:03:25.030416 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:03:25.030431 kernel: audit: type=1334 audit(1765890204.567:93): prog-id=14 op=LOAD Dec 16 13:03:25.030445 kernel: audit: type=1334 audit(1765890204.567:94): prog-id=4 op=UNLOAD Dec 16 13:03:25.030457 kernel: audit: type=1334 audit(1765890204.567:95): prog-id=5 op=UNLOAD Dec 16 13:03:25.030471 kernel: audit: type=1131 audit(1765890204.568:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.030482 kernel: audit: type=1334 audit(1765890204.587:97): prog-id=12 op=UNLOAD Dec 16 13:03:25.030498 kernel: audit: type=1130 audit(1765890204.592:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.030514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 16 13:03:25.030529 kernel: audit: type=1131 audit(1765890204.592:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.030545 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:03:25.030557 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:03:25.030572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:03:25.030584 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:03:25.030596 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:03:25.030606 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:03:25.030617 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:03:25.030629 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:03:25.030641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:03:25.030652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:03:25.030666 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:03:25.030678 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:03:25.030690 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:03:25.030702 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:03:25.030714 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Dec 16 13:03:25.030728 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:03:25.030742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:03:25.030753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:03:25.030764 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:03:25.030777 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:03:25.030789 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:03:25.030800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:03:25.030811 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:03:25.030824 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 13:03:25.030835 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:03:25.030885 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:03:25.030897 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:03:25.030910 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:03:25.030926 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:03:25.030938 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:03:25.030950 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 13:03:25.030961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:03:25.030974 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 13:03:25.030987 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 13:03:25.030998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 16 13:03:25.031009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:03:25.031020 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:03:25.031031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:03:25.031041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:03:25.031052 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:03:25.031065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:03:25.031076 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:03:25.031088 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:03:25.031098 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:03:25.031110 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:03:25.031122 systemd[1]: Reached target machines.target - Containers. Dec 16 13:03:25.031135 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:03:25.031146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:03:25.031158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:03:25.031170 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:03:25.031181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:03:25.031195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:03:25.031207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Dec 16 13:03:25.031219 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:03:25.031231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:03:25.031242 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:03:25.031253 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:03:25.031264 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:03:25.031276 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:03:25.031288 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:03:25.031302 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:03:25.031315 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:03:25.031327 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:03:25.031337 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:03:25.031349 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:03:25.031360 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:03:25.031373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:03:25.031386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:03:25.031398 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Dec 16 13:03:25.031410 kernel: fuse: init (API version 7.41) Dec 16 13:03:25.031420 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:03:25.031431 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:03:25.031464 systemd-journald[2040]: Collecting audit messages is enabled. Dec 16 13:03:25.031492 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:03:25.031505 systemd-journald[2040]: Journal started Dec 16 13:03:25.031533 systemd-journald[2040]: Runtime Journal (/run/log/journal/f13f2752ff5746a986ccd503e3d135be) is 8M, max 158.5M, 150.5M free. Dec 16 13:03:24.722000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 13:03:24.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:24.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:24.930000 audit: BPF prog-id=14 op=UNLOAD Dec 16 13:03:24.930000 audit: BPF prog-id=13 op=UNLOAD Dec 16 13:03:24.930000 audit: BPF prog-id=15 op=LOAD Dec 16 13:03:24.930000 audit: BPF prog-id=16 op=LOAD Dec 16 13:03:24.931000 audit: BPF prog-id=17 op=LOAD Dec 16 13:03:25.024000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:03:25.024000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe6c44b690 a2=4000 a3=0 items=0 ppid=1 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:03:25.024000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:03:24.557792 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:03:24.568575 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 13:03:24.568973 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:03:25.039699 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:03:25.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.039370 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:03:25.041300 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:03:25.044057 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:03:25.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:03:25.046910 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:03:25.047088 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:03:25.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.050179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:03:25.050346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:03:25.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.053238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:03:25.053455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:03:25.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 13:03:25.059027 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:03:25.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.059214 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:03:25.061673 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:03:25.061835 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:03:25.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.067334 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:03:25.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.070365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 13:03:25.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.074397 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:03:25.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.078335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:03:25.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.081008 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:03:25.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.090769 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:03:25.094785 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 13:03:25.096933 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:03:25.096961 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:03:25.100171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Dec 16 13:03:25.104744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:03:25.104923 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:03:25.108972 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:03:25.114010 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:03:25.116545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:03:25.119956 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:03:25.123972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:03:25.125606 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:03:25.131017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:03:25.140133 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:03:25.160931 systemd-journald[2040]: Time spent on flushing to /var/log/journal/f13f2752ff5746a986ccd503e3d135be is 20.312ms for 1112 entries. Dec 16 13:03:25.160931 systemd-journald[2040]: System Journal (/var/log/journal/f13f2752ff5746a986ccd503e3d135be) is 8M, max 2.2G, 2.2G free. Dec 16 13:03:25.229789 systemd-journald[2040]: Received client request to flush runtime journal. 
Dec 16 13:03:25.229832 kernel: ACPI: bus type drm_connector registered Dec 16 13:03:25.229870 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 13:03:25.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.178689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:03:25.181711 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:03:25.181839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:03:25.193053 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:03:25.195921 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:03:25.201580 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Dec 16 13:03:25.204464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:03:25.231203 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:03:25.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.259322 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:03:25.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.362854 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:03:25.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.365000 audit: BPF prog-id=18 op=LOAD Dec 16 13:03:25.365000 audit: BPF prog-id=19 op=LOAD Dec 16 13:03:25.365000 audit: BPF prog-id=20 op=LOAD Dec 16 13:03:25.368444 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 13:03:25.377000 audit: BPF prog-id=21 op=LOAD Dec 16 13:03:25.380977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:03:25.386966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:03:25.457000 audit: BPF prog-id=22 op=LOAD Dec 16 13:03:25.457000 audit: BPF prog-id=23 op=LOAD Dec 16 13:03:25.458000 audit: BPF prog-id=24 op=LOAD Dec 16 13:03:25.459348 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
Dec 16 13:03:25.461000 audit: BPF prog-id=25 op=LOAD Dec 16 13:03:25.461000 audit: BPF prog-id=26 op=LOAD Dec 16 13:03:25.461000 audit: BPF prog-id=27 op=LOAD Dec 16 13:03:25.463518 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:03:25.474061 systemd-tmpfiles[2112]: ACLs are not supported, ignoring. Dec 16 13:03:25.474079 systemd-tmpfiles[2112]: ACLs are not supported, ignoring. Dec 16 13:03:25.479167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:03:25.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.515602 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:03:25.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.550211 systemd-nsresourced[2114]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 13:03:25.551489 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 13:03:25.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.574619 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:03:25.577119 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:03:25.584939 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:03:25.597393 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 16 13:03:25.601033 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:03:25.660857 kernel: loop2: detected capacity change from 0 to 229808 Dec 16 13:03:25.670935 systemd-oomd[2110]: No swap; memory pressure usage will be degraded Dec 16 13:03:25.671535 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 13:03:25.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.720794 systemd-resolved[2111]: Positive Trust Anchors: Dec 16 13:03:25.720813 systemd-resolved[2111]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:03:25.720817 systemd-resolved[2111]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:03:25.720863 systemd-resolved[2111]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:03:25.734875 kernel: loop3: detected capacity change from 0 to 27736 Dec 16 13:03:25.841399 systemd-resolved[2111]: Using system hostname 'ci-4515.1.0-a-968fde264e'. Dec 16 13:03:25.843088 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 13:03:25.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.845787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:03:25.886966 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:03:25.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:25.889000 audit: BPF prog-id=8 op=UNLOAD Dec 16 13:03:25.889000 audit: BPF prog-id=7 op=UNLOAD Dec 16 13:03:25.889000 audit: BPF prog-id=28 op=LOAD Dec 16 13:03:25.889000 audit: BPF prog-id=29 op=LOAD Dec 16 13:03:25.891429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:03:25.921766 systemd-udevd[2138]: Using default interface naming scheme 'v257'. Dec 16 13:03:26.089374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:03:26.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:03:26.091000 audit: BPF prog-id=30 op=LOAD Dec 16 13:03:26.094990 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:03:26.154287 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 16 13:03:26.210446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 13:03:26.215185 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 13:03:26.222542 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:03:26.222599 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 13:03:26.222620 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 13:03:26.224883 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 13:03:26.227089 kernel: Console: switching to colour dummy device 80x25
Dec 16 13:03:26.227141 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 13:03:26.232974 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 13:03:26.235222 systemd-networkd[2142]: lo: Link UP
Dec 16 13:03:26.235234 systemd-networkd[2142]: lo: Gained carrier
Dec 16 13:03:26.237791 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:03:26.240265 systemd-networkd[2142]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:26.240275 systemd-networkd[2142]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:03:26.240638 systemd[1]: Reached target network.target - Network.
Dec 16 13:03:26.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:26.247864 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 13:03:26.246867 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:03:26.253974 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:03:26.256262 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 13:03:26.260899 kernel: hv_netvsc f8615163-0000-1000-2000-6045bd123eeb eth0: Data path switched to VF: enP30832s1
Dec 16 13:03:26.263241 systemd-networkd[2142]: enP30832s1: Link UP
Dec 16 13:03:26.263350 systemd-networkd[2142]: eth0: Link UP
Dec 16 13:03:26.263353 systemd-networkd[2142]: eth0: Gained carrier
Dec 16 13:03:26.263369 systemd-networkd[2142]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 13:03:26.270465 systemd-networkd[2142]: enP30832s1: Gained carrier
Dec 16 13:03:26.275966 systemd-networkd[2142]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16
Dec 16 13:03:26.298911 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:03:26.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:26.322417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:26.346294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:26.346926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:26.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:26.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:26.352964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:26.423868 kernel: loop4: detected capacity change from 0 to 119256
Dec 16 13:03:26.526512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 13:03:26.529991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:03:26.568868 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 16 13:03:26.598673 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:03:26.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:26.830695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:26.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:26.867869 kernel: loop5: detected capacity change from 0 to 111544
Dec 16 13:03:26.880862 kernel: loop6: detected capacity change from 0 to 229808
Dec 16 13:03:26.894871 kernel: loop7: detected capacity change from 0 to 27736
Dec 16 13:03:26.906862 kernel: loop1: detected capacity change from 0 to 119256
Dec 16 13:03:26.919992 (sd-merge)[2227]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'.
Dec 16 13:03:26.923058 (sd-merge)[2227]: Merged extensions into '/usr'.
Dec 16 13:03:26.927199 systemd[1]: Reload requested from client PID 2095 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:03:26.927212 systemd[1]: Reloading...
Dec 16 13:03:26.977913 zram_generator::config[2259]: No configuration found.
Dec 16 13:03:27.220182 systemd[1]: Reloading finished in 292 ms.
Dec 16 13:03:27.240018 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:03:27.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.260676 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:03:27.262578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:03:27.264000 audit: BPF prog-id=31 op=LOAD
Dec 16 13:03:27.264000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 13:03:27.264000 audit: BPF prog-id=32 op=LOAD
Dec 16 13:03:27.264000 audit: BPF prog-id=33 op=LOAD
Dec 16 13:03:27.264000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 13:03:27.264000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 13:03:27.266000 audit: BPF prog-id=34 op=LOAD
Dec 16 13:03:27.271000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=35 op=LOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=36 op=LOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=37 op=LOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=38 op=LOAD
Dec 16 13:03:27.272000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=39 op=LOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=40 op=LOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=41 op=LOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=42 op=LOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=43 op=LOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 13:03:27.273000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 13:03:27.274000 audit: BPF prog-id=44 op=LOAD
Dec 16 13:03:27.274000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 13:03:27.274000 audit: BPF prog-id=45 op=LOAD
Dec 16 13:03:27.274000 audit: BPF prog-id=46 op=LOAD
Dec 16 13:03:27.274000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 13:03:27.274000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 13:03:27.280938 systemd[1]: Reload requested from client PID 2318 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:03:27.280955 systemd[1]: Reloading...
Dec 16 13:03:27.299679 systemd-tmpfiles[2319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:03:27.299901 systemd-tmpfiles[2319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:03:27.300167 systemd-tmpfiles[2319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:03:27.301265 systemd-tmpfiles[2319]: ACLs are not supported, ignoring.
Dec 16 13:03:27.301364 systemd-tmpfiles[2319]: ACLs are not supported, ignoring.
Dec 16 13:03:27.332863 zram_generator::config[2353]: No configuration found.
Dec 16 13:03:27.335372 systemd-tmpfiles[2319]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:27.335389 systemd-tmpfiles[2319]: Skipping /boot
Dec 16 13:03:27.346837 systemd-tmpfiles[2319]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:27.346867 systemd-tmpfiles[2319]: Skipping /boot
Dec 16 13:03:27.530614 systemd[1]: Reloading finished in 249 ms.
Dec 16 13:03:27.557000 audit: BPF prog-id=47 op=LOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=48 op=LOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=49 op=LOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=50 op=LOAD
Dec 16 13:03:27.557000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 13:03:27.558000 audit: BPF prog-id=51 op=LOAD
Dec 16 13:03:27.558000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 13:03:27.558000 audit: BPF prog-id=52 op=LOAD
Dec 16 13:03:27.558000 audit: BPF prog-id=53 op=LOAD
Dec 16 13:03:27.558000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 13:03:27.558000 audit: BPF prog-id=43 op=UNLOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=54 op=LOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=55 op=LOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=56 op=LOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=57 op=LOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=58 op=LOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 13:03:27.559000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 13:03:27.560000 audit: BPF prog-id=59 op=LOAD
Dec 16 13:03:27.560000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 13:03:27.561000 audit: BPF prog-id=60 op=LOAD
Dec 16 13:03:27.561000 audit: BPF prog-id=44 op=UNLOAD
Dec 16 13:03:27.561000 audit: BPF prog-id=61 op=LOAD
Dec 16 13:03:27.561000 audit: BPF prog-id=62 op=LOAD
Dec 16 13:03:27.561000 audit: BPF prog-id=45 op=UNLOAD
Dec 16 13:03:27.561000 audit: BPF prog-id=46 op=UNLOAD
Dec 16 13:03:27.569757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:27.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.578730 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:03:27.582374 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:03:27.586201 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:03:27.592367 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:03:27.595816 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:03:27.602091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:27.602260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:27.606628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:27.613053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:27.616568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:27.619034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:27.619236 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 13:03:27.619348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:27.619452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:27.623454 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:27.623624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:27.624944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:27.625124 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 13:03:27.625227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:27.625327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:27.629000 audit[2417]: SYSTEM_BOOT pid=2417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.635644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:27.636670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:27.639288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:27.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.639505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:27.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.642442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:27.642625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:27.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.648630 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:03:27.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.651264 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:03:27.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.656247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:27.656403 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:27.657174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:03:27.661284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:27.661380 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 13:03:27.661419 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:27.661458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:03:27.661499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:03:27.661530 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:03:27.664931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:27.668018 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:03:27.668211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:03:27.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:27.679248 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:03:27.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:28.018016 systemd-networkd[2142]: eth0: Gained IPv6LL
Dec 16 13:03:28.020248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:03:28.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 13:03:28.022568 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:03:28.110000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 13:03:28.110000 audit[2449]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5ed85b90 a2=420 a3=0 items=0 ppid=2413 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 13:03:28.110000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 13:03:28.112721 augenrules[2449]: No rules
Dec 16 13:03:28.112634 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:03:28.113051 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:03:28.626005 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:03:28.630103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:03:36.837454 ldconfig[2415]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:03:37.007613 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:03:37.010810 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:03:37.030867 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:03:37.034090 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:03:37.037018 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:03:37.038487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:03:37.040343 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:03:37.042102 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:03:37.043553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:03:37.046926 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 13:03:37.049953 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 13:03:37.051366 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:03:37.052873 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:03:37.052892 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:03:37.054079 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:03:37.058101 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:03:37.060797 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:03:37.065754 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:03:37.067201 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:03:37.069914 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:03:37.091289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:03:37.093369 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:03:37.096512 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:03:37.099780 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:03:37.101150 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:03:37.103946 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:03:37.103979 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:03:37.123591 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 13:03:37.128412 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:03:37.133549 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:03:37.136676 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:03:37.142031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:03:37.149832 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:03:37.154030 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:03:37.157947 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:03:37.160487 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:03:37.162554 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Dec 16 13:03:37.167822 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 13:03:37.170629 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 13:03:37.172650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:03:37.176914 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:03:37.181048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:03:37.185688 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:03:37.189823 jq[2466]: false
Dec 16 13:03:37.192081 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:03:37.201902 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:03:37.210039 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:03:37.209790 oslogin_cache_refresh[2471]: Refreshing passwd entry cache
Dec 16 13:03:37.211155 google_oslogin_nss_cache[2471]: oslogin_cache_refresh[2471]: Refreshing passwd entry cache
Dec 16 13:03:37.212407 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:03:37.214483 KVP[2472]: KVP starting; pid is:2472
Dec 16 13:03:37.218078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:03:37.224491 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:03:37.226628 KVP[2472]: KVP LIC Version: 3.1
Dec 16 13:03:37.226863 kernel: hv_utils: KVP IC version 4.0
Dec 16 13:03:37.229882 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:03:37.236351 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:03:37.239333 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:03:37.239578 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:03:37.240281 google_oslogin_nss_cache[2471]: oslogin_cache_refresh[2471]: Failure getting users, quitting
Dec 16 13:03:37.240342 google_oslogin_nss_cache[2471]: oslogin_cache_refresh[2471]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:03:37.240342 google_oslogin_nss_cache[2471]: oslogin_cache_refresh[2471]: Refreshing group entry cache
Dec 16 13:03:37.240279 oslogin_cache_refresh[2471]: Failure getting users, quitting
Dec 16 13:03:37.240296 oslogin_cache_refresh[2471]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:03:37.240336 oslogin_cache_refresh[2471]: Refreshing group entry cache
Dec 16 13:03:37.244059 extend-filesystems[2469]: Found /dev/nvme0n1p6
Dec 16 13:03:37.245599 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:03:37.247416 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:03:37.250947 google_oslogin_nss_cache[2471]: oslogin_cache_refresh[2471]: Failure getting groups, quitting
Dec 16 13:03:37.250947 google_oslogin_nss_cache[2471]: oslogin_cache_refresh[2471]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:03:37.250889 oslogin_cache_refresh[2471]: Failure getting groups, quitting
Dec 16 13:03:37.250900 oslogin_cache_refresh[2471]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:03:37.253417 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:03:37.253886 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:03:37.258888 jq[2486]: true
Dec 16 13:03:37.260939 chronyd[2461]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 13:03:37.266255 chronyd[2461]: Timezone right/UTC failed leap second check, ignoring
Dec 16 13:03:37.266399 chronyd[2461]: Loaded seccomp filter (level 2)
Dec 16 13:03:37.266575 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 13:03:37.284402 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:03:37.284605 jq[2498]: true
Dec 16 13:03:37.288145 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:03:37.322496 extend-filesystems[2469]: Found /dev/nvme0n1p9
Dec 16 13:03:37.329266 extend-filesystems[2469]: Checking size of /dev/nvme0n1p9
Dec 16 13:03:37.332162 update_engine[2483]: I20251216 13:03:37.332093 2483 main.cc:92] Flatcar Update Engine starting
Dec 16 13:03:37.370142 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:03:37.446617 extend-filesystems[2469]: Resized partition /dev/nvme0n1p9
Dec 16 13:03:37.480191 tar[2492]: linux-amd64/LICENSE
Dec 16 13:03:37.480440 tar[2492]: linux-amd64/helm
Dec 16 13:03:37.487784 systemd-logind[2482]: New seat seat0.
Dec 16 13:03:37.492932 systemd-logind[2482]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 16 13:03:37.493137 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:03:37.508009 extend-filesystems[2546]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:03:37.824066 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks
Dec 16 13:03:39.246329 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.362 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.367 INFO Fetch successful
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.368 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.372 INFO Fetch successful
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.372 INFO Fetching http://168.63.129.16/machine/554bd329-fae4-4105-b317-62af6c06c0b0/4e6bc6e6%2D2ae4%2D49c9%2D82be%2Df82765387941.%5Fci%2D4515.1.0%2Da%2D968fde264e?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.373 INFO Fetch successful
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.373 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 13:03:39.246416 coreos-metadata[2463]: Dec 16 13:03:38.381 INFO Fetch successful
Dec 16 13:03:38.140116 dbus-daemon[2464]: [system] SELinux support is enabled
Dec 16 13:03:39.247065 update_engine[2483]: I20251216 13:03:38.144594 2483 update_check_scheduler.cc:74] Next update check in 9m50s
Dec 16 13:03:38.140413 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:03:38.151574 dbus-daemon[2464]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:03:38.145688 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:03:38.145717 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:03:38.148410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:03:38.148430 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:03:38.151940 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:03:38.154748 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:03:38.406302 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:03:38.409219 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:03:39.023671 locksmithd[2574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:03:39.285400 tar[2492]: linux-amd64/README.md Dec 16 13:03:39.894423 extend-filesystems[2546]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:03:39.894423 extend-filesystems[2546]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 13:03:39.894423 extend-filesystems[2546]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Dec 16 13:03:39.799407 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:03:39.898780 extend-filesystems[2469]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:03:39.799644 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 16 13:03:39.895411 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:03:39.994238 sshd_keygen[2494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:03:40.018938 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:03:40.031125 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:03:40.036448 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:03:40.048761 bash[2538]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:03:40.051004 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:03:40.054324 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:03:40.059759 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:03:40.060505 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:03:40.068970 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:03:40.081629 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:03:40.116570 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:03:40.122395 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:03:40.127732 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:03:40.130563 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 13:03:40.423000 containerd[2510]: time="2025-12-16T13:03:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:03:40.423822 containerd[2510]: time="2025-12-16T13:03:40.423775481Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 13:03:40.436258 containerd[2510]: time="2025-12-16T13:03:40.436211565Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.379µs" Dec 16 13:03:40.436258 containerd[2510]: time="2025-12-16T13:03:40.436241020Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:03:40.436347 containerd[2510]: time="2025-12-16T13:03:40.436278772Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:03:40.436347 containerd[2510]: time="2025-12-16T13:03:40.436291043Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436410530Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436428537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436473727Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436484469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 
13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436673572Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436685259Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436694928Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436703393Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436821933Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436830925Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.436902013Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437496 containerd[2510]: time="2025-12-16T13:03:40.437046531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437755 containerd[2510]: time="2025-12-16T13:03:40.437066090Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:03:40.437755 containerd[2510]: time="2025-12-16T13:03:40.437075988Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:03:40.437755 containerd[2510]: time="2025-12-16T13:03:40.437117279Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:03:40.437755 containerd[2510]: time="2025-12-16T13:03:40.437388013Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:03:40.437755 containerd[2510]: time="2025-12-16T13:03:40.437438031Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:03:40.491624 containerd[2510]: time="2025-12-16T13:03:40.491586191Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:03:40.491786 containerd[2510]: time="2025-12-16T13:03:40.491770805Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:03:40.491997 containerd[2510]: time="2025-12-16T13:03:40.491970543Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:03:40.491997 containerd[2510]: time="2025-12-16T13:03:40.491993746Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:03:40.492066 containerd[2510]: time="2025-12-16T13:03:40.492008798Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:03:40.492087 containerd[2510]: time="2025-12-16T13:03:40.492068092Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:03:40.492106 containerd[2510]: time="2025-12-16T13:03:40.492095951Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:03:40.492129 containerd[2510]: time="2025-12-16T13:03:40.492108729Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:03:40.492149 containerd[2510]: time="2025-12-16T13:03:40.492128318Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:03:40.492149 containerd[2510]: time="2025-12-16T13:03:40.492141591Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:03:40.492190 containerd[2510]: time="2025-12-16T13:03:40.492154854Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:03:40.492190 containerd[2510]: time="2025-12-16T13:03:40.492166079Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:03:40.492190 containerd[2510]: time="2025-12-16T13:03:40.492176054Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:03:40.492241 containerd[2510]: time="2025-12-16T13:03:40.492188769Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492319043Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492341280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492355200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492366481Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492377833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492387131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492399498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492409373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492420489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492432000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492442107Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492465610Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492527130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492540798Z" level=info msg="Start snapshots syncer" Dec 16 13:03:40.493124 containerd[2510]: time="2025-12-16T13:03:40.492558792Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 
13:03:40.493452 containerd[2510]: time="2025-12-16T13:03:40.492830053Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:03:40.493452 containerd[2510]: time="2025-12-16T13:03:40.492914941Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.492959534Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493039335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493057980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493068877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493079509Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493090993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493101274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493112145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493153005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493164308Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493187849Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493203651Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:03:40.493590 containerd[2510]: time="2025-12-16T13:03:40.493213526Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493223389Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493232118Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493246403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493259158Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493279525Z" level=info msg="runtime interface created" Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493286073Z" level=info msg="created NRI interface" Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493299194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493311404Z" level=info msg="Connect containerd service" Dec 16 13:03:40.493825 containerd[2510]: time="2025-12-16T13:03:40.493332123Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:03:40.494869 
containerd[2510]: time="2025-12-16T13:03:40.494452252Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:03:40.625999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:03:40.634074 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:03:41.194484 kubelet[2632]: E1216 13:03:41.194416 2632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:03:41.196448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:03:41.196583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:03:41.196972 systemd[1]: kubelet.service: Consumed 962ms CPU time, 268.9M memory peak. Dec 16 13:03:41.316826 containerd[2510]: time="2025-12-16T13:03:41.316707583Z" level=info msg="Start subscribing containerd event" Dec 16 13:03:41.316826 containerd[2510]: time="2025-12-16T13:03:41.316764688Z" level=info msg="Start recovering state" Dec 16 13:03:41.317249 containerd[2510]: time="2025-12-16T13:03:41.317214783Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:03:41.317325 containerd[2510]: time="2025-12-16T13:03:41.317315101Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 13:03:41.319892 containerd[2510]: time="2025-12-16T13:03:41.319817401Z" level=info msg="Start event monitor" Dec 16 13:03:41.319892 containerd[2510]: time="2025-12-16T13:03:41.319864192Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:03:41.319892 containerd[2510]: time="2025-12-16T13:03:41.319873131Z" level=info msg="Start streaming server" Dec 16 13:03:41.319892 containerd[2510]: time="2025-12-16T13:03:41.319882864Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:03:41.319892 containerd[2510]: time="2025-12-16T13:03:41.319893925Z" level=info msg="runtime interface starting up..." Dec 16 13:03:41.319892 containerd[2510]: time="2025-12-16T13:03:41.319900052Z" level=info msg="starting plugins..." Dec 16 13:03:41.320071 containerd[2510]: time="2025-12-16T13:03:41.319913461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:03:41.320213 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:03:41.322744 containerd[2510]: time="2025-12-16T13:03:41.322719019Z" level=info msg="containerd successfully booted in 0.900594s" Dec 16 13:03:41.323554 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:03:41.326932 systemd[1]: Startup finished in 4.623s (kernel) + 10.607s (initrd) + 19.370s (userspace) = 34.601s. Dec 16 13:03:42.254278 login[2616]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 16 13:03:42.256028 login[2617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:03:42.262067 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:03:42.263410 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:03:42.268992 systemd-logind[2482]: New session 1 of user core. 
Dec 16 13:03:42.279020 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:03:42.281489 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:03:42.410749 (systemd)[2650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:03:42.413455 systemd-logind[2482]: New session c1 of user core. Dec 16 13:03:42.548357 systemd[2650]: Queued start job for default target default.target. Dec 16 13:03:42.554717 systemd[2650]: Created slice app.slice - User Application Slice. Dec 16 13:03:42.554757 systemd[2650]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 13:03:42.554771 systemd[2650]: Reached target paths.target - Paths. Dec 16 13:03:42.554944 systemd[2650]: Reached target timers.target - Timers. Dec 16 13:03:42.555900 systemd[2650]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:03:42.556591 systemd[2650]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 13:03:42.567268 systemd[2650]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:03:42.568140 systemd[2650]: Reached target sockets.target - Sockets. Dec 16 13:03:42.568516 systemd[2650]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 13:03:42.568606 systemd[2650]: Reached target basic.target - Basic System. Dec 16 13:03:42.568645 systemd[2650]: Reached target default.target - Main User Target. Dec 16 13:03:42.568669 systemd[2650]: Startup finished in 148ms. Dec 16 13:03:42.568905 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:03:42.574012 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:03:43.254614 login[2616]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:03:43.259195 systemd-logind[2482]: New session 2 of user core. 
Dec 16 13:03:43.264208 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:03:45.966802 waagent[2614]: 2025-12-16T13:03:45.966715Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.967239Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.967340Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.968131Z INFO Daemon Daemon Run daemon Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.968416Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.968683Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.968939Z INFO Daemon Daemon Activate resource disk Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.969092Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.970709Z INFO Daemon Daemon Found device: None Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.970947Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.971293Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.972315Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:03:45.975676 waagent[2614]: 2025-12-16T13:03:45.972632Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:03:45.999118 waagent[2614]: 2025-12-16T13:03:45.981356Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:03:45.999118 waagent[2614]: 2025-12-16T13:03:45.982008Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:03:45.999118 waagent[2614]: 2025-12-16T13:03:45.982537Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:03:45.999118 waagent[2614]: 2025-12-16T13:03:45.983024Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:03:46.094959 waagent[2614]: 2025-12-16T13:03:46.094351Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:03:46.123769 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 13:03:46.126131 waagent[2614]: 2025-12-16T13:03:46.126073Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:03:46.127357 waagent[2614]: 2025-12-16T13:03:46.126723Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:03:46.127357 waagent[2614]: 2025-12-16T13:03:46.127134Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 16 13:03:46.130132 waagent[2614]: 2025-12-16T13:03:46.127363Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:03:46.130132 waagent[2614]: 2025-12-16T13:03:46.127528Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:03:46.130132 waagent[2614]: 2025-12-16T13:03:46.127691Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:03:46.154289 waagent[2614]: 2025-12-16T13:03:46.154252Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:03:46.155869 waagent[2614]: 2025-12-16T13:03:46.154770Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:03:46.155869 waagent[2614]: 2025-12-16T13:03:46.154928Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:03:46.262838 waagent[2614]: 2025-12-16T13:03:46.262674Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:03:46.265164 waagent[2614]: 2025-12-16T13:03:46.263159Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:03:46.270333 waagent[2614]: 2025-12-16T13:03:46.270296Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:03:46.289807 waagent[2614]: 2025-12-16T13:03:46.289773Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Dec 16 13:03:46.290517 waagent[2614]: 2025-12-16T13:03:46.290479Z INFO Daemon Dec 16 13:03:46.290601 waagent[2614]: 2025-12-16T13:03:46.290571Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2dd3f5ca-e80a-44d2-af9f-a1514a69e6c6 eTag: 731659803154562376 source: Fabric] Dec 16 13:03:46.292997 waagent[2614]: 2025-12-16T13:03:46.290880Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:03:46.292997 waagent[2614]: 2025-12-16T13:03:46.291245Z INFO Daemon Dec 16 13:03:46.292997 waagent[2614]: 2025-12-16T13:03:46.291382Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:03:46.301063 waagent[2614]: 2025-12-16T13:03:46.296447Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:03:46.460427 waagent[2614]: 2025-12-16T13:03:46.460356Z INFO Daemon Downloaded certificate {'thumbprint': 'E1C4F9C963FEFF30BCB12913A23AF3DEAF4BCB02', 'hasPrivateKey': True} Dec 16 13:03:46.463020 waagent[2614]: 2025-12-16T13:03:46.462978Z INFO Daemon Fetch goal state completed Dec 16 13:03:46.470885 waagent[2614]: 2025-12-16T13:03:46.470836Z INFO Daemon Daemon Starting provisioning Dec 16 13:03:46.471839 waagent[2614]: 2025-12-16T13:03:46.471807Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 13:03:46.472792 waagent[2614]: 2025-12-16T13:03:46.472763Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-968fde264e] Dec 16 13:03:46.476270 waagent[2614]: 2025-12-16T13:03:46.476231Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-968fde264e] Dec 16 13:03:46.477623 waagent[2614]: 2025-12-16T13:03:46.477585Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:03:46.479258 waagent[2614]: 2025-12-16T13:03:46.479228Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:03:46.486613 systemd-networkd[2142]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:03:46.486621 systemd-networkd[2142]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 13:03:46.486684 systemd-networkd[2142]: eth0: DHCP lease lost Dec 16 13:03:46.496181 waagent[2614]: 2025-12-16T13:03:46.496136Z INFO Daemon Daemon Create user account if not exists Dec 16 13:03:46.497460 waagent[2614]: 2025-12-16T13:03:46.497096Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:03:46.497460 waagent[2614]: 2025-12-16T13:03:46.497314Z INFO Daemon Daemon Configure sudoer Dec 16 13:03:46.501683 waagent[2614]: 2025-12-16T13:03:46.501640Z INFO Daemon Daemon Configure sshd Dec 16 13:03:46.501891 systemd-networkd[2142]: eth0: DHCPv4 address 10.200.4.31/24, gateway 10.200.4.1 acquired from 168.63.129.16 Dec 16 13:03:46.506728 waagent[2614]: 2025-12-16T13:03:46.506687Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:03:46.509331 waagent[2614]: 2025-12-16T13:03:46.507481Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:03:47.592316 waagent[2614]: 2025-12-16T13:03:47.592253Z INFO Daemon Daemon Provisioning complete Dec 16 13:03:47.606781 waagent[2614]: 2025-12-16T13:03:47.606743Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:03:47.608533 waagent[2614]: 2025-12-16T13:03:47.607417Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 13:03:47.608533 waagent[2614]: 2025-12-16T13:03:47.607765Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:03:47.715218 waagent[2703]: 2025-12-16T13:03:47.715136Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:03:47.715595 waagent[2703]: 2025-12-16T13:03:47.715257Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 16 13:03:47.715595 waagent[2703]: 2025-12-16T13:03:47.715299Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:03:47.715595 waagent[2703]: 2025-12-16T13:03:47.715338Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:03:47.757369 waagent[2703]: 2025-12-16T13:03:47.757304Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:03:47.757516 waagent[2703]: 2025-12-16T13:03:47.757489Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:03:47.757569 waagent[2703]: 2025-12-16T13:03:47.757549Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:03:47.764969 waagent[2703]: 2025-12-16T13:03:47.764910Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:03:47.773638 waagent[2703]: 2025-12-16T13:03:47.773601Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Dec 16 13:03:47.774026 waagent[2703]: 2025-12-16T13:03:47.773990Z INFO ExtHandler Dec 16 13:03:47.774081 waagent[2703]: 2025-12-16T13:03:47.774050Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 76139db9-ccf5-4db2-a103-39fa721fe742 eTag: 731659803154562376 source: Fabric] Dec 16 13:03:47.774280 waagent[2703]: 2025-12-16T13:03:47.774253Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:03:47.774624 waagent[2703]: 2025-12-16T13:03:47.774595Z INFO ExtHandler Dec 16 13:03:47.774662 waagent[2703]: 2025-12-16T13:03:47.774638Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:03:47.778363 waagent[2703]: 2025-12-16T13:03:47.778330Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:03:47.844660 waagent[2703]: 2025-12-16T13:03:47.844565Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E1C4F9C963FEFF30BCB12913A23AF3DEAF4BCB02', 'hasPrivateKey': True} Dec 16 13:03:47.845028 waagent[2703]: 2025-12-16T13:03:47.844998Z INFO ExtHandler Fetch goal state completed Dec 16 13:03:47.857728 waagent[2703]: 2025-12-16T13:03:47.857679Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 16 13:03:47.861994 waagent[2703]: 2025-12-16T13:03:47.861945Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2703 Dec 16 13:03:47.862109 waagent[2703]: 2025-12-16T13:03:47.862076Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:03:47.862349 waagent[2703]: 2025-12-16T13:03:47.862323Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:03:47.863435 waagent[2703]: 2025-12-16T13:03:47.863400Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:03:47.863724 waagent[2703]: 2025-12-16T13:03:47.863693Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:03:47.863838 waagent[2703]: 2025-12-16T13:03:47.863814Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:03:47.864276 waagent[2703]: 2025-12-16T13:03:47.864246Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Dec 16 13:03:47.886255 waagent[2703]: 2025-12-16T13:03:47.886227Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:03:47.886391 waagent[2703]: 2025-12-16T13:03:47.886368Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:03:47.891863 waagent[2703]: 2025-12-16T13:03:47.891764Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:03:47.897042 systemd[1]: Reload requested from client PID 2718 ('systemctl') (unit waagent.service)... Dec 16 13:03:47.897056 systemd[1]: Reloading... Dec 16 13:03:47.978939 zram_generator::config[2760]: No configuration found. Dec 16 13:03:48.170285 systemd[1]: Reloading finished in 272 ms. Dec 16 13:03:48.187903 waagent[2703]: 2025-12-16T13:03:48.187054Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:03:48.187903 waagent[2703]: 2025-12-16T13:03:48.187202Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:03:48.190250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:03:49.073656 waagent[2703]: 2025-12-16T13:03:49.073582Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 13:03:49.074024 waagent[2703]: 2025-12-16T13:03:49.073975Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:03:49.074747 waagent[2703]: 2025-12-16T13:03:49.074710Z INFO ExtHandler ExtHandler Starting env monitor service. 
Dec 16 13:03:49.074929 waagent[2703]: 2025-12-16T13:03:49.074895Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:03:49.075094 waagent[2703]: 2025-12-16T13:03:49.074970Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:03:49.075280 waagent[2703]: 2025-12-16T13:03:49.075256Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:03:49.075435 waagent[2703]: 2025-12-16T13:03:49.075412Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:03:49.075502 waagent[2703]: 2025-12-16T13:03:49.075475Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:03:49.075707 waagent[2703]: 2025-12-16T13:03:49.075681Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:03:49.075707 waagent[2703]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:03:49.075707 waagent[2703]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:03:49.075707 waagent[2703]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:03:49.075707 waagent[2703]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:03:49.075707 waagent[2703]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:03:49.075707 waagent[2703]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:03:49.076110 waagent[2703]: 2025-12-16T13:03:49.076031Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:03:49.076261 waagent[2703]: 2025-12-16T13:03:49.076228Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:03:49.076491 waagent[2703]: 2025-12-16T13:03:49.076465Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:03:49.076524 waagent[2703]: 2025-12-16T13:03:49.076500Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Dec 16 13:03:49.076831 waagent[2703]: 2025-12-16T13:03:49.076787Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:03:49.076959 waagent[2703]: 2025-12-16T13:03:49.076936Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 13:03:49.077107 waagent[2703]: 2025-12-16T13:03:49.077044Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:03:49.077178 waagent[2703]: 2025-12-16T13:03:49.077141Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:03:49.077294 waagent[2703]: 2025-12-16T13:03:49.077276Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:03:49.085898 waagent[2703]: 2025-12-16T13:03:49.085837Z INFO ExtHandler ExtHandler Dec 16 13:03:49.085978 waagent[2703]: 2025-12-16T13:03:49.085915Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8b33b80d-30de-4eb3-97bd-36adbfe299c7 correlation 79025f12-6578-461c-a78e-072ece17963b created: 2025-12-16T13:02:44.086517Z] Dec 16 13:03:49.086215 waagent[2703]: 2025-12-16T13:03:49.086188Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:03:49.086809 waagent[2703]: 2025-12-16T13:03:49.086706Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 13:03:49.155330 waagent[2703]: 2025-12-16T13:03:49.154806Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:03:49.155330 waagent[2703]: Try `iptables -h' or 'iptables --help' for more information.) 
Dec 16 13:03:49.155330 waagent[2703]: 2025-12-16T13:03:49.155227Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 555C1482-70DA-4B4E-AB26-4B79F95AF109;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:03:49.186281 waagent[2703]: 2025-12-16T13:03:49.186229Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:03:49.186281 waagent[2703]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:03:49.186281 waagent[2703]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:03:49.186281 waagent[2703]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:12:3e:eb brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx6045bd123eeb Dec 16 13:03:49.186281 waagent[2703]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:12:3e:eb brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:03:49.186281 waagent[2703]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:03:49.186281 waagent[2703]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:03:49.186281 waagent[2703]: 2: eth0 inet 10.200.4.31/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:03:49.186281 waagent[2703]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:03:49.186281 waagent[2703]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:03:49.186281 waagent[2703]: 2: eth0 inet6 fe80::6245:bdff:fe12:3eeb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:03:49.278706 waagent[2703]: 2025-12-16T13:03:49.278652Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:03:49.278706 waagent[2703]: Chain 
INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:49.278706 waagent[2703]: pkts bytes target prot opt in out source destination Dec 16 13:03:49.278706 waagent[2703]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:49.278706 waagent[2703]: pkts bytes target prot opt in out source destination Dec 16 13:03:49.278706 waagent[2703]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:49.278706 waagent[2703]: pkts bytes target prot opt in out source destination Dec 16 13:03:49.278706 waagent[2703]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:03:49.278706 waagent[2703]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:03:49.278706 waagent[2703]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:03:49.281377 waagent[2703]: 2025-12-16T13:03:49.281325Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:03:49.281377 waagent[2703]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:49.281377 waagent[2703]: pkts bytes target prot opt in out source destination Dec 16 13:03:49.281377 waagent[2703]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:49.281377 waagent[2703]: pkts bytes target prot opt in out source destination Dec 16 13:03:49.281377 waagent[2703]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:03:49.281377 waagent[2703]: pkts bytes target prot opt in out source destination Dec 16 13:03:49.281377 waagent[2703]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:03:49.281377 waagent[2703]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:03:49.281377 waagent[2703]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:03:51.416715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:03:51.418207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:03:51.974966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:03:51.981042 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:03:52.013566 kubelet[2860]: E1216 13:03:52.013515 2860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:03:52.016704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:03:52.016874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:03:52.017212 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.3M memory peak. Dec 16 13:04:00.443493 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:04:00.444685 systemd[1]: Started sshd@0-10.200.4.31:22-10.200.16.10:44192.service - OpenSSH per-connection server daemon (10.200.16.10:44192). Dec 16 13:04:01.047743 chronyd[2461]: Selected source PHC0 Dec 16 13:04:01.247865 sshd[2868]: Accepted publickey for core from 10.200.16.10 port 44192 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:01.249010 sshd-session[2868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:01.253611 systemd-logind[2482]: New session 3 of user core. Dec 16 13:04:01.257018 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:04:01.637768 systemd[1]: Started sshd@1-10.200.4.31:22-10.200.16.10:44196.service - OpenSSH per-connection server daemon (10.200.16.10:44196). Dec 16 13:04:02.083229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 13:04:02.084790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:02.192006 sshd[2874]: Accepted publickey for core from 10.200.16.10 port 44196 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:02.193148 sshd-session[2874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:02.198031 systemd-logind[2482]: New session 4 of user core. Dec 16 13:04:02.207027 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:04:03.401617 sshd[2880]: Connection closed by 10.200.16.10 port 44196 Dec 16 13:04:02.590729 systemd[1]: Started sshd@2-10.200.4.31:22-10.200.16.10:44202.service - OpenSSH per-connection server daemon (10.200.16.10:44202). Dec 16 13:04:03.402165 sshd[2883]: Accepted publickey for core from 10.200.16.10 port 44202 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:03.535825 sshd-session[2874]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:03.536070 sshd-session[2883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:03.542197 systemd-logind[2482]: New session 5 of user core. Dec 16 13:04:03.551058 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:04:03.551466 systemd[1]: sshd@1-10.200.4.31:22-10.200.16.10:44196.service: Deactivated successfully. Dec 16 13:04:03.555339 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:04:03.557661 systemd-logind[2482]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:04:03.559608 systemd-logind[2482]: Removed session 4. Dec 16 13:04:03.753354 sshd[2889]: Connection closed by 10.200.16.10 port 44202 Dec 16 13:04:03.753928 sshd-session[2883]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:03.756987 systemd[1]: sshd@2-10.200.4.31:22-10.200.16.10:44202.service: Deactivated successfully. 
Dec 16 13:04:03.758569 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:04:03.760325 systemd-logind[2482]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:04:03.761064 systemd-logind[2482]: Removed session 5. Dec 16 13:04:03.863711 systemd[1]: Started sshd@3-10.200.4.31:22-10.200.16.10:44212.service - OpenSSH per-connection server daemon (10.200.16.10:44212). Dec 16 13:04:04.377246 sshd[2895]: Accepted publickey for core from 10.200.16.10 port 44212 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:04.378409 sshd-session[2895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:04.382952 systemd-logind[2482]: New session 6 of user core. Dec 16 13:04:04.390013 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:04:04.663038 sshd[2898]: Connection closed by 10.200.16.10 port 44212 Dec 16 13:04:04.663896 sshd-session[2895]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:04.666750 systemd[1]: sshd@3-10.200.4.31:22-10.200.16.10:44212.service: Deactivated successfully. Dec 16 13:04:04.668307 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:04:04.670357 systemd-logind[2482]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:04:04.671185 systemd-logind[2482]: Removed session 6. Dec 16 13:04:04.770663 systemd[1]: Started sshd@4-10.200.4.31:22-10.200.16.10:44216.service - OpenSSH per-connection server daemon (10.200.16.10:44216). Dec 16 13:04:05.285691 sshd[2904]: Accepted publickey for core from 10.200.16.10 port 44216 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:05.286779 sshd-session[2904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:05.290924 systemd-logind[2482]: New session 7 of user core. Dec 16 13:04:05.297018 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 13:04:06.743980 sudo[2908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:04:06.744217 sudo[2908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:06.774612 sudo[2908]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:06.869814 sshd[2907]: Connection closed by 10.200.16.10 port 44216 Dec 16 13:04:06.870538 sshd-session[2904]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:06.873830 systemd[1]: sshd@4-10.200.4.31:22-10.200.16.10:44216.service: Deactivated successfully. Dec 16 13:04:06.876381 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:04:06.877368 systemd-logind[2482]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:04:06.878122 systemd-logind[2482]: Removed session 7. Dec 16 13:04:06.979710 systemd[1]: Started sshd@5-10.200.4.31:22-10.200.16.10:44222.service - OpenSSH per-connection server daemon (10.200.16.10:44222). Dec 16 13:04:07.328540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:07.332070 (kubelet)[2922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:07.369362 kubelet[2922]: E1216 13:04:07.369305 2922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:07.371199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:07.371303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:07.371908 systemd[1]: kubelet.service: Consumed 138ms CPU time, 108.4M memory peak. 
Dec 16 13:04:07.500616 sshd[2914]: Accepted publickey for core from 10.200.16.10 port 44222 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:07.501764 sshd-session[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:07.506910 systemd-logind[2482]: New session 8 of user core. Dec 16 13:04:07.514038 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:04:07.697578 sudo[2931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:04:07.697811 sudo[2931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:07.748156 sudo[2931]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:07.753351 sudo[2930]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:04:07.753573 sudo[2930]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:07.761961 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:04:07.794679 kernel: kauditd_printk_skb: 146 callbacks suppressed Dec 16 13:04:07.794753 kernel: audit: type=1305 audit(1765890247.791:242): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:04:07.791000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:04:07.794836 augenrules[2953]: No rules Dec 16 13:04:07.791000 audit[2953]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda157e1f0 a2=420 a3=0 items=0 ppid=2934 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:07.796085 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 16 13:04:07.796985 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:04:07.800068 kernel: audit: type=1300 audit(1765890247.791:242): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda157e1f0 a2=420 a3=0 items=0 ppid=2934 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:07.800130 kernel: audit: type=1327 audit(1765890247.791:242): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:07.791000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:04:07.802612 kernel: audit: type=1130 audit(1765890247.796:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.802885 sudo[2930]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:07.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.802000 audit[2930]: USER_END pid=2930 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:07.808649 kernel: audit: type=1131 audit(1765890247.796:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.808736 kernel: audit: type=1106 audit(1765890247.802:245): pid=2930 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.808764 kernel: audit: type=1104 audit(1765890247.802:246): pid=2930 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.802000 audit[2930]: CRED_DISP pid=2930 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:07.899234 sshd[2929]: Connection closed by 10.200.16.10 port 44222 Dec 16 13:04:07.899720 sshd-session[2914]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:07.900000 audit[2914]: USER_END pid=2914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:07.907665 kernel: audit: type=1106 audit(1765890247.900:247): pid=2914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:07.907719 kernel: audit: type=1104 audit(1765890247.900:248): pid=2914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:07.900000 audit[2914]: CRED_DISP pid=2914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:07.907860 systemd[1]: sshd@5-10.200.4.31:22-10.200.16.10:44222.service: Deactivated successfully. Dec 16 13:04:07.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.31:22-10.200.16.10:44222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.912616 systemd[1]: session-8.scope: Deactivated successfully. 
Dec 16 13:04:07.912869 kernel: audit: type=1131 audit(1765890247.907:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.31:22-10.200.16.10:44222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:07.914606 systemd-logind[2482]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:04:07.915466 systemd-logind[2482]: Removed session 8. Dec 16 13:04:08.010696 systemd[1]: Started sshd@6-10.200.4.31:22-10.200.16.10:44236.service - OpenSSH per-connection server daemon (10.200.16.10:44236). Dec 16 13:04:08.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.31:22-10.200.16.10:44236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.536000 audit[2962]: USER_ACCT pid=2962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.537907 sshd[2962]: Accepted publickey for core from 10.200.16.10 port 44236 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:04:08.537000 audit[2962]: CRED_ACQ pid=2962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.537000 audit[2962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde299a370 a2=3 a3=0 items=0 ppid=1 pid=2962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:08.537000 audit: 
PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:04:08.538996 sshd-session[2962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:04:08.543753 systemd-logind[2482]: New session 9 of user core. Dec 16 13:04:08.554041 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:04:08.555000 audit[2962]: USER_START pid=2962 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.556000 audit[2965]: CRED_ACQ pid=2965 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:04:08.734000 audit[2966]: USER_ACCT pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.735294 sudo[2966]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:04:08.734000 audit[2966]: CRED_REFR pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:04:08.735527 sudo[2966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:04:08.736000 audit[2966]: USER_START pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:11.582772 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 13:04:11.596080 (dockerd)[2984]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:04:13.388188 dockerd[2984]: time="2025-12-16T13:04:13.388131947Z" level=info msg="Starting up" Dec 16 13:04:13.391565 dockerd[2984]: time="2025-12-16T13:04:13.391529195Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:04:13.400941 dockerd[2984]: time="2025-12-16T13:04:13.400906298Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:04:13.433629 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4275691910-merged.mount: Deactivated successfully. Dec 16 13:04:13.666060 dockerd[2984]: time="2025-12-16T13:04:13.666024995Z" level=info msg="Loading containers: start." 
Dec 16 13:04:13.706934 kernel: Initializing XFRM netlink socket Dec 16 13:04:13.753944 kernel: kauditd_printk_skb: 11 callbacks suppressed Dec 16 13:04:13.754045 kernel: audit: type=1325 audit(1765890253.751:259): table=nat:5 family=2 entries=2 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.751000 audit[3030]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.762715 kernel: audit: type=1300 audit(1765890253.751:259): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb7aefff0 a2=0 a3=0 items=0 ppid=2984 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.751000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb7aefff0 a2=0 a3=0 items=0 ppid=2984 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:13.765890 kernel: audit: type=1327 audit(1765890253.751:259): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:13.759000 audit[3032]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.759000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0c1f3190 a2=0 a3=0 items=0 ppid=2984 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:04:13.772434 kernel: audit: type=1325 audit(1765890253.759:260): table=filter:6 family=2 entries=2 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.772495 kernel: audit: type=1300 audit(1765890253.759:260): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe0c1f3190 a2=0 a3=0 items=0 ppid=2984 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:13.775929 kernel: audit: type=1327 audit(1765890253.759:260): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:13.761000 audit[3034]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.779869 kernel: audit: type=1325 audit(1765890253.761:261): table=filter:7 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.761000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed5b5c6d0 a2=0 a3=0 items=0 ppid=2984 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.783401 kernel: audit: type=1300 audit(1765890253.761:261): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed5b5c6d0 a2=0 a3=0 items=0 ppid=2984 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.761000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:13.787060 kernel: audit: type=1327 audit(1765890253.761:261): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:13.765000 audit[3036]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.790170 kernel: audit: type=1325 audit(1765890253.765:262): table=filter:8 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.765000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8dc92210 a2=0 a3=0 items=0 ppid=2984 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:04:13.772000 audit[3038]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.772000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff78b07110 a2=0 a3=0 items=0 ppid=2984 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:04:13.778000 audit[3040]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.778000 
audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff96363420 a2=0 a3=0 items=0 ppid=2984 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.778000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:13.789000 audit[3042]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.789000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdfff80310 a2=0 a3=0 items=0 ppid=2984 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:13.791000 audit[3044]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.791000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe37ac9340 a2=0 a3=0 items=0 ppid=2984 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:04:13.829000 audit[3047]: NETFILTER_CFG table=nat:13 family=2 entries=2 
op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.829000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffeaa75e510 a2=0 a3=0 items=0 ppid=2984 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 13:04:13.831000 audit[3049]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.831000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffca929aea0 a2=0 a3=0 items=0 ppid=2984 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:04:13.832000 audit[3051]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.832000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd79685490 a2=0 a3=0 items=0 ppid=2984 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.832000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:04:13.834000 audit[3053]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.834000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc2b8c0790 a2=0 a3=0 items=0 ppid=2984 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:13.836000 audit[3055]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.836000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffddc603910 a2=0 a3=0 items=0 ppid=2984 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:04:13.932000 audit[3085]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.932000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffff50d4c20 a2=0 a3=0 items=0 ppid=2984 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:04:13.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:04:13.935000 audit[3087]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.935000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe13d93370 a2=0 a3=0 items=0 ppid=2984 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:04:13.936000 audit[3089]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.936000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4d0a2680 a2=0 a3=0 items=0 ppid=2984 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:04:13.938000 audit[3091]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.938000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd94b41d00 a2=0 a3=0 items=0 ppid=2984 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.938000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:04:13.940000 audit[3093]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.940000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca9d1bbc0 a2=0 a3=0 items=0 ppid=2984 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:04:13.941000 audit[3095]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.941000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc8ee542c0 a2=0 a3=0 items=0 ppid=2984 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:13.943000 audit[3097]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.943000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffca12bb1c0 a2=0 a3=0 items=0 ppid=2984 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:13.943000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:13.945000 audit[3099]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.945000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcd300be10 a2=0 a3=0 items=0 ppid=2984 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:04:13.947000 audit[3101]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.947000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff790725c0 a2=0 a3=0 items=0 ppid=2984 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 13:04:13.949000 audit[3103]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.949000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff591da170 a2=0 a3=0 items=0 ppid=2984 pid=3103 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:04:13.951000 audit[3105]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.951000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe1c84a280 a2=0 a3=0 items=0 ppid=2984 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:04:13.953000 audit[3107]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.953000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff30bb5c90 a2=0 a3=0 items=0 ppid=2984 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:04:13.955000 audit[3109]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.955000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffbe947db0 a2=0 a3=0 
items=0 ppid=2984 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:04:13.960000 audit[3114]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.960000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd453d4390 a2=0 a3=0 items=0 ppid=2984 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:04:13.962000 audit[3116]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.962000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc969c5320 a2=0 a3=0 items=0 ppid=2984 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:04:13.964000 audit[3118]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:13.964000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff3af96970 a2=0 a3=0 items=0 
ppid=2984 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:04:13.965000 audit[3120]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.965000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea6126e90 a2=0 a3=0 items=0 ppid=2984 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:04:13.967000 audit[3122]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.967000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed0443da0 a2=0 a3=0 items=0 ppid=2984 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:04:13.969000 audit[3124]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:13.969000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdc8f2a1a0 a2=0 a3=0 items=0 ppid=2984 
pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:13.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:04:14.020000 audit[3129]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.020000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffb0547060 a2=0 a3=0 items=0 ppid=2984 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 13:04:14.022000 audit[3131]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.022000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffff32debc0 a2=0 a3=0 items=0 ppid=2984 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.022000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 13:04:14.030000 audit[3139]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.030000 audit[3139]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=300 a0=3 a1=7fff35e94970 a2=0 a3=0 items=0 ppid=2984 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.030000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 13:04:14.035000 audit[3144]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.035000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff0821e490 a2=0 a3=0 items=0 ppid=2984 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 13:04:14.037000 audit[3146]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.037000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff8d572dc0 a2=0 a3=0 items=0 ppid=2984 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 13:04:14.039000 
audit[3148]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.039000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffca64f01d0 a2=0 a3=0 items=0 ppid=2984 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 13:04:14.041000 audit[3150]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.041000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd43fd7ac0 a2=0 a3=0 items=0 ppid=2984 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:04:14.043000 audit[3152]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:14.043000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff257f7780 a2=0 a3=0 items=0 ppid=2984 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:14.043000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 13:04:14.044644 systemd-networkd[2142]: docker0: Link UP Dec 16 13:04:14.065252 dockerd[2984]: time="2025-12-16T13:04:14.065217820Z" level=info msg="Loading containers: done." Dec 16 13:04:14.158722 dockerd[2984]: time="2025-12-16T13:04:14.158662452Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:04:14.158907 dockerd[2984]: time="2025-12-16T13:04:14.158762275Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:04:14.158907 dockerd[2984]: time="2025-12-16T13:04:14.158877955Z" level=info msg="Initializing buildkit" Dec 16 13:04:14.216538 dockerd[2984]: time="2025-12-16T13:04:14.216430428Z" level=info msg="Completed buildkit initialization" Dec 16 13:04:14.223319 dockerd[2984]: time="2025-12-16T13:04:14.223270615Z" level=info msg="Daemon has completed initialization" Dec 16 13:04:14.224092 dockerd[2984]: time="2025-12-16T13:04:14.223419621Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:04:14.223606 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:04:14.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:14.339103 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 16 13:04:15.223432 containerd[2510]: time="2025-12-16T13:04:15.223247775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 13:04:16.160443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130695081.mount: Deactivated successfully. Dec 16 13:04:17.416609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:04:17.418067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:17.964073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:17.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:17.973063 (kubelet)[3254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:18.006922 kubelet[3254]: E1216 13:04:18.006880 3254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:18.008751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:18.008912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:18.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:18.009294 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.1M memory peak. 
Dec 16 13:04:23.735963 update_engine[2483]: I20251216 13:04:23.735893 2483 update_attempter.cc:509] Updating boot flags... Dec 16 13:04:28.166619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 13:04:28.168154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:32.266720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:32.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:32.268641 kernel: kauditd_printk_skb: 113 callbacks suppressed Dec 16 13:04:32.268721 kernel: audit: type=1130 audit(1765890272.266:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:32.279074 (kubelet)[3292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:32.311671 kubelet[3292]: E1216 13:04:32.311625 3292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:32.313339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:32.313480 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:32.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 13:04:32.313866 systemd[1]: kubelet.service: Consumed 131ms CPU time, 110.1M memory peak. Dec 16 13:04:32.317898 kernel: audit: type=1131 audit(1765890272.312:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:33.434107 containerd[2510]: time="2025-12-16T13:04:33.434054547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:33.437276 containerd[2510]: time="2025-12-16T13:04:33.437148186Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29138910" Dec 16 13:04:33.441003 containerd[2510]: time="2025-12-16T13:04:33.440977182Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:33.445588 containerd[2510]: time="2025-12-16T13:04:33.445556536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:33.446808 containerd[2510]: time="2025-12-16T13:04:33.446320491Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 18.223028838s" Dec 16 13:04:33.446808 containerd[2510]: time="2025-12-16T13:04:33.446354876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference 
\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 13:04:33.447194 containerd[2510]: time="2025-12-16T13:04:33.447164984Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 13:04:36.101629 containerd[2510]: time="2025-12-16T13:04:36.101579519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:36.104870 containerd[2510]: time="2025-12-16T13:04:36.104814348Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 13:04:36.108093 containerd[2510]: time="2025-12-16T13:04:36.108053087Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:36.112513 containerd[2510]: time="2025-12-16T13:04:36.112469977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:36.113399 containerd[2510]: time="2025-12-16T13:04:36.113174187Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.665980171s" Dec 16 13:04:36.113399 containerd[2510]: time="2025-12-16T13:04:36.113204607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 13:04:36.113615 containerd[2510]: 
time="2025-12-16T13:04:36.113595789Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 13:04:39.062033 containerd[2510]: time="2025-12-16T13:04:39.061982261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:39.065027 containerd[2510]: time="2025-12-16T13:04:39.064989829Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 16 13:04:39.069092 containerd[2510]: time="2025-12-16T13:04:39.069053244Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:39.093673 containerd[2510]: time="2025-12-16T13:04:39.093605373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:39.094555 containerd[2510]: time="2025-12-16T13:04:39.094415088Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.980793741s" Dec 16 13:04:39.094555 containerd[2510]: time="2025-12-16T13:04:39.094448328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 13:04:39.095050 containerd[2510]: time="2025-12-16T13:04:39.095022733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 13:04:40.054341 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3045937393.mount: Deactivated successfully. Dec 16 13:04:40.487081 containerd[2510]: time="2025-12-16T13:04:40.487034644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:40.490922 containerd[2510]: time="2025-12-16T13:04:40.490895241Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 16 13:04:40.503025 containerd[2510]: time="2025-12-16T13:04:40.502972655Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:40.507336 containerd[2510]: time="2025-12-16T13:04:40.507291949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:40.507771 containerd[2510]: time="2025-12-16T13:04:40.507620608Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.41251199s" Dec 16 13:04:40.507771 containerd[2510]: time="2025-12-16T13:04:40.507653521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 13:04:40.508394 containerd[2510]: time="2025-12-16T13:04:40.508156975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 13:04:41.152423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835268257.mount: Deactivated successfully. 
Dec 16 13:04:42.142319 containerd[2510]: time="2025-12-16T13:04:42.142268332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:42.148271 containerd[2510]: time="2025-12-16T13:04:42.148117675Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128770" Dec 16 13:04:42.151636 containerd[2510]: time="2025-12-16T13:04:42.151609931Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:42.156000 containerd[2510]: time="2025-12-16T13:04:42.155969684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:42.157122 containerd[2510]: time="2025-12-16T13:04:42.156642184Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.648458526s" Dec 16 13:04:42.157122 containerd[2510]: time="2025-12-16T13:04:42.156671644Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 13:04:42.157378 containerd[2510]: time="2025-12-16T13:04:42.157357277Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:04:42.416733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 13:04:42.418335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:04:42.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:42.913976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:42.919866 kernel: audit: type=1130 audit(1765890282.913:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:42.920379 (kubelet)[3380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:42.955218 kubelet[3380]: E1216 13:04:42.955175 3380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:42.957100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:42.957231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:42.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:42.957686 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.6M memory peak. Dec 16 13:04:42.963009 kernel: audit: type=1131 audit(1765890282.956:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 13:04:43.354382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341209468.mount: Deactivated successfully. Dec 16 13:04:43.376743 containerd[2510]: time="2025-12-16T13:04:43.376690823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:43.379778 containerd[2510]: time="2025-12-16T13:04:43.379596672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=317462" Dec 16 13:04:43.382932 containerd[2510]: time="2025-12-16T13:04:43.382909342Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:43.387425 containerd[2510]: time="2025-12-16T13:04:43.387396105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:43.387967 containerd[2510]: time="2025-12-16T13:04:43.387801306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.230364509s" Dec 16 13:04:43.387967 containerd[2510]: time="2025-12-16T13:04:43.387829684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:04:43.388429 containerd[2510]: time="2025-12-16T13:04:43.388405824Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 13:04:43.955450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842133072.mount: Deactivated successfully. Dec 16 13:04:45.809938 containerd[2510]: time="2025-12-16T13:04:45.809887219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:45.813389 containerd[2510]: time="2025-12-16T13:04:45.813217818Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=47284200" Dec 16 13:04:45.816584 containerd[2510]: time="2025-12-16T13:04:45.816557956Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:45.821020 containerd[2510]: time="2025-12-16T13:04:45.820988876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:45.821855 containerd[2510]: time="2025-12-16T13:04:45.821745028Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.433310941s" Dec 16 13:04:45.821855 containerd[2510]: time="2025-12-16T13:04:45.821774009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 13:04:48.311527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:48.311707 systemd[1]: kubelet.service: Consumed 132ms CPU time, 110.6M memory peak. 
Dec 16 13:04:48.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:48.314155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:48.319860 kernel: audit: type=1130 audit(1765890288.310:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:48.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:48.327866 kernel: audit: type=1131 audit(1765890288.310:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:48.344710 systemd[1]: Reload requested from client PID 3475 ('systemctl') (unit session-9.scope)... Dec 16 13:04:48.344825 systemd[1]: Reloading... Dec 16 13:04:48.434028 zram_generator::config[3525]: No configuration found. Dec 16 13:04:48.621875 systemd[1]: Reloading finished in 276 ms. Dec 16 13:04:48.940561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:04:48.940658 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:04:48.941060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:48.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:48.944807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:04:48.945918 kernel: audit: type=1130 audit(1765890288.940:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:04:48.945000 audit: BPF prog-id=87 op=LOAD Dec 16 13:04:48.949147 kernel: audit: type=1334 audit(1765890288.945:309): prog-id=87 op=LOAD Dec 16 13:04:48.949212 kernel: audit: type=1334 audit(1765890288.945:310): prog-id=88 op=LOAD Dec 16 13:04:48.945000 audit: BPF prog-id=88 op=LOAD Dec 16 13:04:48.950640 kernel: audit: type=1334 audit(1765890288.945:311): prog-id=84 op=UNLOAD Dec 16 13:04:48.945000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:04:48.952304 kernel: audit: type=1334 audit(1765890288.945:312): prog-id=85 op=UNLOAD Dec 16 13:04:48.945000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:04:48.953791 kernel: audit: type=1334 audit(1765890288.949:313): prog-id=89 op=LOAD Dec 16 13:04:48.949000 audit: BPF prog-id=89 op=LOAD Dec 16 13:04:48.949000 audit: BPF prog-id=86 op=UNLOAD Dec 16 13:04:48.956909 kernel: audit: type=1334 audit(1765890288.949:314): prog-id=86 op=UNLOAD Dec 16 13:04:48.956963 kernel: audit: type=1334 audit(1765890288.953:315): prog-id=90 op=LOAD Dec 16 13:04:48.953000 audit: BPF prog-id=90 op=LOAD Dec 16 13:04:48.953000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:04:48.953000 audit: BPF prog-id=91 op=LOAD Dec 16 13:04:48.953000 audit: BPF prog-id=67 op=UNLOAD Dec 16 13:04:48.955000 audit: BPF prog-id=92 op=LOAD Dec 16 13:04:48.956000 audit: BPF prog-id=93 op=LOAD Dec 16 13:04:48.956000 audit: BPF prog-id=68 op=UNLOAD Dec 16 13:04:48.956000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:04:48.956000 audit: BPF prog-id=94 op=LOAD Dec 16 13:04:48.956000 audit: BPF prog-id=81 op=UNLOAD Dec 16 13:04:48.956000 audit: BPF prog-id=95 op=LOAD Dec 16 13:04:48.956000 audit: BPF prog-id=96 op=LOAD Dec 16 13:04:48.956000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:04:48.956000 audit: BPF prog-id=83 
op=UNLOAD Dec 16 13:04:48.964000 audit: BPF prog-id=97 op=LOAD Dec 16 13:04:48.964000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:04:48.964000 audit: BPF prog-id=98 op=LOAD Dec 16 13:04:48.964000 audit: BPF prog-id=99 op=LOAD Dec 16 13:04:48.964000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:04:48.964000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:04:48.966000 audit: BPF prog-id=100 op=LOAD Dec 16 13:04:48.966000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:04:48.966000 audit: BPF prog-id=101 op=LOAD Dec 16 13:04:48.966000 audit: BPF prog-id=102 op=LOAD Dec 16 13:04:48.966000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:04:48.966000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:04:48.967000 audit: BPF prog-id=103 op=LOAD Dec 16 13:04:48.967000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:04:48.967000 audit: BPF prog-id=104 op=LOAD Dec 16 13:04:48.967000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:04:48.967000 audit: BPF prog-id=105 op=LOAD Dec 16 13:04:48.967000 audit: BPF prog-id=106 op=LOAD Dec 16 13:04:48.967000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:04:48.967000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:04:54.405220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:54.412906 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 13:04:54.413004 kernel: audit: type=1130 audit(1765890294.405:349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:04:54.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:04:54.419103 (kubelet)[3592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:04:54.451930 kubelet[3592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:54.451930 kubelet[3592]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:04:54.451930 kubelet[3592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:54.452223 kubelet[3592]: I1216 13:04:54.451973 3592 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:04:54.859479 kubelet[3592]: I1216 13:04:54.859441 3592 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:04:54.859479 kubelet[3592]: I1216 13:04:54.859468 3592 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:04:54.859707 kubelet[3592]: I1216 13:04:54.859694 3592 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:04:54.889818 kubelet[3592]: E1216 13:04:54.889777 3592 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:04:54.891352 kubelet[3592]: I1216 13:04:54.891330 
3592 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:04:54.897279 kubelet[3592]: I1216 13:04:54.897265 3592 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:04:54.900529 kubelet[3592]: I1216 13:04:54.900508 3592 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:04:54.900710 kubelet[3592]: I1216 13:04:54.900688 3592 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:04:54.900877 kubelet[3592]: I1216 13:04:54.900710 3592 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-968fde264e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":
"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:04:54.901000 kubelet[3592]: I1216 13:04:54.900881 3592 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:04:54.901000 kubelet[3592]: I1216 13:04:54.900890 3592 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:04:54.901000 kubelet[3592]: I1216 13:04:54.900992 3592 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:54.903315 kubelet[3592]: I1216 13:04:54.903299 3592 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:04:54.903315 kubelet[3592]: I1216 13:04:54.903316 3592 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:04:54.903393 kubelet[3592]: I1216 13:04:54.903339 3592 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:04:54.903393 kubelet[3592]: I1216 13:04:54.903354 3592 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:04:54.908158 kubelet[3592]: I1216 13:04:54.908130 3592 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:04:54.908636 kubelet[3592]: I1216 13:04:54.908619 3592 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:04:54.909280 kubelet[3592]: W1216 13:04:54.909259 3592 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 13:04:54.911598 kubelet[3592]: I1216 13:04:54.911584 3592 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:04:54.911714 kubelet[3592]: I1216 13:04:54.911707 3592 server.go:1289] "Started kubelet" Dec 16 13:04:54.918060 kubelet[3592]: E1216 13:04:54.918035 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:04:54.919409 kubelet[3592]: E1216 13:04:54.918097 3592 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-968fde264e.1881b3ddfc85296a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-968fde264e,UID:ci-4515.1.0-a-968fde264e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-968fde264e,},FirstTimestamp:2025-12-16 13:04:54.91167473 +0000 UTC m=+0.488985320,LastTimestamp:2025-12-16 13:04:54.91167473 +0000 UTC m=+0.488985320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-968fde264e,}" Dec 16 13:04:54.919621 kubelet[3592]: E1216 13:04:54.919587 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-968fde264e&limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 
13:04:54.920743 kubelet[3592]: I1216 13:04:54.920551 3592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:04:54.920743 kubelet[3592]: I1216 13:04:54.920554 3592 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:04:54.922260 kubelet[3592]: I1216 13:04:54.922205 3592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:04:54.922571 kubelet[3592]: I1216 13:04:54.922553 3592 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:04:54.925702 kubelet[3592]: I1216 13:04:54.925685 3592 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:04:54.925000 audit[3607]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.929999 kubelet[3592]: I1216 13:04:54.926498 3592 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:04:54.935387 kernel: audit: type=1325 audit(1765890294.925:350): table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.935458 kernel: audit: type=1300 audit(1765890294.925:350): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc23bea9c0 a2=0 a3=0 items=0 ppid=3592 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:54.925000 audit[3607]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc23bea9c0 a2=0 a3=0 items=0 ppid=3592 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:04:54.935581 kubelet[3592]: I1216 13:04:54.935520 3592 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:04:54.938765 kernel: audit: type=1327 audit(1765890294.925:350): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:04:54.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:04:54.939193 kubelet[3592]: E1216 13:04:54.935747 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:04:54.939193 kubelet[3592]: I1216 13:04:54.938305 3592 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:04:54.939193 kubelet[3592]: I1216 13:04:54.938365 3592 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:04:54.939952 kubelet[3592]: E1216 13:04:54.939619 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:04:54.939952 kubelet[3592]: E1216 13:04:54.939704 3592 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-968fde264e?timeout=10s\": dial tcp 10.200.4.31:6443: connect: connection refused" interval="200ms" Dec 16 13:04:54.939000 audit[3608]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.949041 kernel: audit: type=1325 audit(1765890294.939:351): table=filter:46 family=2 entries=1 op=nft_register_chain pid=3608 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.949123 kernel: audit: type=1300 audit(1765890294.939:351): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4433b4b0 a2=0 a3=0 items=0 ppid=3592 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:54.939000 audit[3608]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4433b4b0 a2=0 a3=0 items=0 ppid=3592 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:54.952026 kernel: audit: type=1327 audit(1765890294.939:351): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:04:54.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:04:54.955704 kernel: audit: type=1325 audit(1765890294.951:352): table=filter:47 family=2 entries=2 op=nft_register_chain pid=3610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.951000 audit[3610]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.951000 audit[3610]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe0b83d570 a2=0 a3=0 items=0 ppid=3592 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:54.961338 kubelet[3592]: I1216 13:04:54.959258 3592 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:04:54.961338 kubelet[3592]: I1216 
13:04:54.959508 3592 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:04:54.964365 kernel: audit: type=1300 audit(1765890294.951:352): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe0b83d570 a2=0 a3=0 items=0 ppid=3592 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:54.964427 kernel: audit: type=1327 audit(1765890294.951:352): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:54.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:54.964488 kubelet[3592]: E1216 13:04:54.963522 3592 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:04:54.963000 audit[3612]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:54.963000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff1027ddf0 a2=0 a3=0 items=0 ppid=3592 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:54.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:04:54.965624 kubelet[3592]: I1216 13:04:54.965223 3592 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:04:54.975821 kubelet[3592]: I1216 13:04:54.975808 3592 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:04:54.975898 kubelet[3592]: I1216 13:04:54.975870 3592 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:04:54.975898 kubelet[3592]: I1216 13:04:54.975887 3592 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:55.036827 kubelet[3592]: E1216 13:04:55.036775 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:04:55.056000 audit[3619]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:55.056000 audit[3619]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff4549ea70 a2=0 a3=0 items=0 ppid=3592 pid=3619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.056000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 13:04:55.058186 kubelet[3592]: I1216 13:04:55.058128 3592 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:04:55.058000 audit[3620]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:55.058000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8d63b560 a2=0 a3=0 items=0 ppid=3592 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:04:55.059773 kubelet[3592]: I1216 13:04:55.059501 3592 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:04:55.059773 kubelet[3592]: I1216 13:04:55.059523 3592 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:04:55.059773 kubelet[3592]: I1216 13:04:55.059547 3592 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:04:55.059773 kubelet[3592]: I1216 13:04:55.059554 3592 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:04:55.060676 kubelet[3592]: E1216 13:04:55.059956 3592 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:04:55.060676 kubelet[3592]: E1216 13:04:55.060420 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:04:55.060000 audit[3621]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:55.060000 audit[3623]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:55.060000 audit[3623]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff61af2db0 a2=0 a3=0 items=0 ppid=3592 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:04:55.060000 audit[3621]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc24d7e1c0 a2=0 a3=0 items=0 ppid=3592 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.060000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:04:55.062000 audit[3625]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_chain pid=3625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:55.062000 audit[3625]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7c72bc10 a2=0 a3=0 items=0 ppid=3592 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:04:55.062000 audit[3626]: NETFILTER_CFG table=nat:54 family=2 entries=1 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:55.062000 audit[3626]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa572c6f0 a2=0 a3=0 items=0 ppid=3592 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:04:55.063000 audit[3627]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_chain pid=3627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:04:55.063000 audit[3627]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc51178f70 a2=0 a3=0 items=0 ppid=3592 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.063000 audit[3628]: 
NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:04:55.063000 audit[3628]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf64a6890 a2=0 a3=0 items=0 ppid=3592 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:55.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:04:55.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:04:55.137111 kubelet[3592]: E1216 13:04:55.136990 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:04:55.140563 kubelet[3592]: E1216 13:04:55.140535 3592 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-968fde264e?timeout=10s\": dial tcp 10.200.4.31:6443: connect: connection refused" interval="400ms" Dec 16 13:04:55.160777 kubelet[3592]: E1216 13:04:55.160746 3592 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:55.238118 kubelet[3592]: E1216 13:04:55.238061 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:04:55.292472 kubelet[3592]: I1216 13:04:55.292450 3592 policy_none.go:49] "None policy: Start" Dec 16 13:04:55.292472 kubelet[3592]: I1216 13:04:55.292478 3592 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:04:55.292574 kubelet[3592]: I1216 13:04:55.292492 3592 
state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:04:55.338551 kubelet[3592]: E1216 13:04:55.338518 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:04:55.347303 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:04:55.360998 kubelet[3592]: E1216 13:04:55.360976 3592 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:55.361181 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:04:55.372988 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:04:55.374366 kubelet[3592]: E1216 13:04:55.374348 3592 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:04:55.375021 kubelet[3592]: I1216 13:04:55.374524 3592 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:04:55.375021 kubelet[3592]: I1216 13:04:55.374538 3592 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:04:55.375021 kubelet[3592]: I1216 13:04:55.374726 3592 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:04:55.376276 kubelet[3592]: E1216 13:04:55.376254 3592 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:04:55.376376 kubelet[3592]: E1216 13:04:55.376294 3592 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:04:55.476744 kubelet[3592]: I1216 13:04:55.476717 3592 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:55.477099 kubelet[3592]: E1216 13:04:55.477075 3592 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.31:6443/api/v1/nodes\": dial tcp 10.200.4.31:6443: connect: connection refused" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:55.541817 kubelet[3592]: E1216 13:04:55.541777 3592 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-968fde264e?timeout=10s\": dial tcp 10.200.4.31:6443: connect: connection refused" interval="800ms" Dec 16 13:04:55.678947 kubelet[3592]: I1216 13:04:55.678910 3592 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:55.679282 kubelet[3592]: E1216 13:04:55.679261 3592 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.31:6443/api/v1/nodes\": dial tcp 10.200.4.31:6443: connect: connection refused" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:55.843424 kubelet[3592]: I1216 13:04:55.843291 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7291f6e8a5349c84efd675014d304c91-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-968fde264e\" (UID: \"7291f6e8a5349c84efd675014d304c91\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-968fde264e" Dec 16 13:04:55.855820 kubelet[3592]: E1216 13:04:55.855792 3592 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://10.200.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:04:55.884433 kubelet[3592]: E1216 13:04:55.884409 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:04:55.980294 kubelet[3592]: E1216 13:04:55.980254 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-968fde264e&limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:04:56.081235 kubelet[3592]: I1216 13:04:56.081201 3592 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.081601 kubelet[3592]: E1216 13:04:56.081567 3592 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.31:6443/api/v1/nodes\": dial tcp 10.200.4.31:6443: connect: connection refused" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.189623 kubelet[3592]: E1216 13:04:56.189587 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:04:56.343005 kubelet[3592]: E1216 13:04:56.342958 3592 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.200.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-968fde264e?timeout=10s\": dial tcp 10.200.4.31:6443: connect: connection refused" interval="1.6s" Dec 16 13:04:56.702304 systemd[1]: Created slice kubepods-burstable-pod7291f6e8a5349c84efd675014d304c91.slice - libcontainer container kubepods-burstable-pod7291f6e8a5349c84efd675014d304c91.slice. Dec 16 13:04:56.712343 kubelet[3592]: E1216 13:04:56.712312 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.713246 containerd[2510]: time="2025-12-16T13:04:56.713207769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-968fde264e,Uid:7291f6e8a5349c84efd675014d304c91,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:56.747593 kubelet[3592]: I1216 13:04:56.747533 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8ca13fabe96b19e007c9fbcf6409dc9-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-968fde264e\" (UID: \"c8ca13fabe96b19e007c9fbcf6409dc9\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.747703 kubelet[3592]: I1216 13:04:56.747606 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8ca13fabe96b19e007c9fbcf6409dc9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-968fde264e\" (UID: \"c8ca13fabe96b19e007c9fbcf6409dc9\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.747703 kubelet[3592]: I1216 13:04:56.747630 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c8ca13fabe96b19e007c9fbcf6409dc9-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-968fde264e\" (UID: \"c8ca13fabe96b19e007c9fbcf6409dc9\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.845652 systemd[1]: Created slice kubepods-burstable-podc8ca13fabe96b19e007c9fbcf6409dc9.slice - libcontainer container kubepods-burstable-podc8ca13fabe96b19e007c9fbcf6409dc9.slice. Dec 16 13:04:56.847514 kubelet[3592]: E1216 13:04:56.847483 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.883971 kubelet[3592]: I1216 13:04:56.883948 3592 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.884356 kubelet[3592]: E1216 13:04:56.884336 3592 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.31:6443/api/v1/nodes\": dial tcp 10.200.4.31:6443: connect: connection refused" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.948596 kubelet[3592]: I1216 13:04:56.948562 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.948596 kubelet[3592]: I1216 13:04:56.948598 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 
13:04:56.980875 kubelet[3592]: I1216 13:04:56.948621 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.980875 kubelet[3592]: I1216 13:04:56.948637 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.980875 kubelet[3592]: I1216 13:04:56.948655 3592 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:04:56.982725 kubelet[3592]: E1216 13:04:56.982693 3592 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:04:57.054290 systemd[1]: Created slice kubepods-burstable-pod0e12d78fc276679233c1e13f0da02538.slice - libcontainer container kubepods-burstable-pod0e12d78fc276679233c1e13f0da02538.slice. 
Dec 16 13:04:57.056333 kubelet[3592]: E1216 13:04:57.056296 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:57.057005 containerd[2510]: time="2025-12-16T13:04:57.056974887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-968fde264e,Uid:0e12d78fc276679233c1e13f0da02538,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:57.148764 containerd[2510]: time="2025-12-16T13:04:57.148723069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-968fde264e,Uid:c8ca13fabe96b19e007c9fbcf6409dc9,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:57.831162 kubelet[3592]: E1216 13:04:57.831118 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:04:57.944018 kubelet[3592]: E1216 13:04:57.943964 3592 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-968fde264e?timeout=10s\": dial tcp 10.200.4.31:6443: connect: connection refused" interval="3.2s" Dec 16 13:04:58.474987 kubelet[3592]: E1216 13:04:58.474942 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:04:58.486606 kubelet[3592]: I1216 13:04:58.486581 3592 kubelet_node_status.go:75] "Attempting to 
register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:58.486890 kubelet[3592]: E1216 13:04:58.486833 3592 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.31:6443/api/v1/nodes\": dial tcp 10.200.4.31:6443: connect: connection refused" node="ci-4515.1.0-a-968fde264e" Dec 16 13:04:58.511263 kubelet[3592]: E1216 13:04:58.511238 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-968fde264e&limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:04:58.657905 containerd[2510]: time="2025-12-16T13:04:58.657834462Z" level=info msg="connecting to shim c9c484130e3be12082bf9b71df9424289ebe53a3f82cc51dabaeebff6244b6fb" address="unix:///run/containerd/s/f9651e611fd57af038c5d0f72f0fcf72448e0878362eae4ae31ddd5135be1eb2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:58.679102 systemd[1]: Started cri-containerd-c9c484130e3be12082bf9b71df9424289ebe53a3f82cc51dabaeebff6244b6fb.scope - libcontainer container c9c484130e3be12082bf9b71df9424289ebe53a3f82cc51dabaeebff6244b6fb. 
Dec 16 13:04:58.687000 audit: BPF prog-id=107 op=LOAD Dec 16 13:04:58.687000 audit: BPF prog-id=108 op=LOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.687000 audit: BPF prog-id=108 op=UNLOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.687000 audit: BPF prog-id=109 op=LOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.687000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.687000 audit: BPF prog-id=110 op=LOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.687000 audit: BPF prog-id=110 op=UNLOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.687000 audit: BPF prog-id=109 op=UNLOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:58.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.687000 audit: BPF prog-id=111 op=LOAD Dec 16 13:04:58.687000 audit[3649]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3637 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:58.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339633438343133306533626531323038326266396237316466393432 Dec 16 13:04:58.847647 containerd[2510]: time="2025-12-16T13:04:58.847527135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-968fde264e,Uid:7291f6e8a5349c84efd675014d304c91,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9c484130e3be12082bf9b71df9424289ebe53a3f82cc51dabaeebff6244b6fb\"" Dec 16 13:04:58.995124 containerd[2510]: time="2025-12-16T13:04:58.995015145Z" level=info msg="CreateContainer within sandbox \"c9c484130e3be12082bf9b71df9424289ebe53a3f82cc51dabaeebff6244b6fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:04:59.001217 containerd[2510]: time="2025-12-16T13:04:59.000608932Z" level=info msg="connecting to shim c7a62b6e5b202132c3ae8ddde2234e460d3c2f7430cedd43e4dad5ecd36cf6e2" address="unix:///run/containerd/s/2d0cc704e6ad4da72e7c4503d6bade7f8eefc86e9598ca7e247663fa5ad20974" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:59.056040 systemd[1]: Started 
cri-containerd-c7a62b6e5b202132c3ae8ddde2234e460d3c2f7430cedd43e4dad5ecd36cf6e2.scope - libcontainer container c7a62b6e5b202132c3ae8ddde2234e460d3c2f7430cedd43e4dad5ecd36cf6e2. Dec 16 13:04:59.064000 audit: BPF prog-id=112 op=LOAD Dec 16 13:04:59.064000 audit: BPF prog-id=113 op=LOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.064000 audit: BPF prog-id=113 op=UNLOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.064000 audit: BPF prog-id=114 op=LOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.064000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.064000 audit: BPF prog-id=115 op=LOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.064000 audit: BPF prog-id=115 op=UNLOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.064000 audit: BPF prog-id=114 op=UNLOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:04:59.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.064000 audit: BPF prog-id=116 op=LOAD Dec 16 13:04:59.064000 audit[3695]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3684 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337613632623665356232303231333263336165386464646532323334 Dec 16 13:04:59.251093 containerd[2510]: time="2025-12-16T13:04:59.251032965Z" level=info msg="connecting to shim 80e44f6864f6bb21b2c0cd0604496f53ad4b6ad09cbf6e9572c16fe753f236a5" address="unix:///run/containerd/s/0a0c6b25f860846b2eb213baee37f36c5bef5ec553bf2870d78f64a063d903d7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:59.251536 containerd[2510]: time="2025-12-16T13:04:59.251444090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-968fde264e,Uid:0e12d78fc276679233c1e13f0da02538,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7a62b6e5b202132c3ae8ddde2234e460d3c2f7430cedd43e4dad5ecd36cf6e2\"" Dec 16 13:04:59.267530 kubelet[3592]: E1216 13:04:59.267502 3592 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.31:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:04:59.274005 systemd[1]: Started cri-containerd-80e44f6864f6bb21b2c0cd0604496f53ad4b6ad09cbf6e9572c16fe753f236a5.scope - libcontainer container 80e44f6864f6bb21b2c0cd0604496f53ad4b6ad09cbf6e9572c16fe753f236a5. Dec 16 13:04:59.282000 audit: BPF prog-id=117 op=LOAD Dec 16 13:04:59.282000 audit: BPF prog-id=118 op=LOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.282000 audit: BPF prog-id=118 op=UNLOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.282000 audit: BPF prog-id=119 op=LOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.282000 audit: BPF prog-id=120 op=LOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.282000 audit: BPF prog-id=120 op=UNLOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.282000 audit: BPF prog-id=119 op=UNLOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.282000 audit: BPF prog-id=121 op=LOAD Dec 16 13:04:59.282000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3730 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830653434663638363466366262323162326330636430363034343936 Dec 16 13:04:59.340978 containerd[2510]: time="2025-12-16T13:04:59.340938347Z" level=info msg="Container 8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:59.342067 containerd[2510]: time="2025-12-16T13:04:59.342040001Z" level=info msg="CreateContainer within sandbox \"c7a62b6e5b202132c3ae8ddde2234e460d3c2f7430cedd43e4dad5ecd36cf6e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:04:59.388072 containerd[2510]: time="2025-12-16T13:04:59.387965701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-968fde264e,Uid:c8ca13fabe96b19e007c9fbcf6409dc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e44f6864f6bb21b2c0cd0604496f53ad4b6ad09cbf6e9572c16fe753f236a5\"" Dec 16 13:04:59.495048 containerd[2510]: 
time="2025-12-16T13:04:59.495009101Z" level=info msg="CreateContainer within sandbox \"80e44f6864f6bb21b2c0cd0604496f53ad4b6ad09cbf6e9572c16fe753f236a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:04:59.737040 containerd[2510]: time="2025-12-16T13:04:59.737000607Z" level=info msg="CreateContainer within sandbox \"c9c484130e3be12082bf9b71df9424289ebe53a3f82cc51dabaeebff6244b6fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2\"" Dec 16 13:04:59.737675 containerd[2510]: time="2025-12-16T13:04:59.737640024Z" level=info msg="StartContainer for \"8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2\"" Dec 16 13:04:59.738796 containerd[2510]: time="2025-12-16T13:04:59.738755345Z" level=info msg="connecting to shim 8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2" address="unix:///run/containerd/s/f9651e611fd57af038c5d0f72f0fcf72448e0878362eae4ae31ddd5135be1eb2" protocol=ttrpc version=3 Dec 16 13:04:59.763031 systemd[1]: Started cri-containerd-8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2.scope - libcontainer container 8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2. 
Dec 16 13:04:59.771000 audit: BPF prog-id=122 op=LOAD Dec 16 13:04:59.773600 kernel: kauditd_printk_skb: 93 callbacks suppressed Dec 16 13:04:59.773661 kernel: audit: type=1334 audit(1765890299.771:386): prog-id=122 op=LOAD Dec 16 13:04:59.776906 kernel: audit: type=1334 audit(1765890299.772:387): prog-id=123 op=LOAD Dec 16 13:04:59.776976 kernel: audit: type=1300 audit(1765890299.772:387): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.772000 audit: BPF prog-id=123 op=LOAD Dec 16 13:04:59.772000 audit[3772]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.787142 kernel: audit: type=1327 audit(1765890299.772:387): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.788711 kernel: audit: type=1334 audit(1765890299.772:388): prog-id=123 op=UNLOAD Dec 16 13:04:59.772000 audit: BPF prog-id=123 op=UNLOAD Dec 16 13:04:59.793519 kernel: audit: type=1300 audit(1765890299.772:388): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.772000 audit[3772]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.798442 kernel: audit: type=1327 audit(1765890299.772:388): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.803570 kernel: audit: type=1334 audit(1765890299.773:389): prog-id=124 op=LOAD Dec 16 13:04:59.773000 audit: BPF prog-id=124 op=LOAD Dec 16 13:04:59.808768 kernel: audit: type=1300 audit(1765890299.773:389): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.773000 audit[3772]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:04:59.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.814861 kernel: audit: type=1327 audit(1765890299.773:389): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.815332 containerd[2510]: time="2025-12-16T13:04:59.815293826Z" level=info msg="Container 1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:59.773000 audit: BPF prog-id=125 op=LOAD Dec 16 13:04:59.773000 audit[3772]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.773000 audit: BPF prog-id=125 op=UNLOAD Dec 16 13:04:59.773000 audit[3772]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.773000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.773000 audit: BPF prog-id=124 op=UNLOAD Dec 16 13:04:59.773000 audit[3772]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.773000 audit: BPF prog-id=126 op=LOAD Dec 16 13:04:59.773000 audit[3772]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3637 pid=3772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:04:59.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863666234393638666530636133366333323630653266363962313365 Dec 16 13:04:59.857227 containerd[2510]: time="2025-12-16T13:04:59.857090773Z" level=info msg="StartContainer for \"8cfb4968fe0ca36c3260e2f69b13e940aed58cd21bd68f63317ba5b4d9f60fe2\" returns successfully" Dec 16 13:05:00.038339 containerd[2510]: time="2025-12-16T13:05:00.038227073Z" level=info msg="CreateContainer within sandbox 
\"c7a62b6e5b202132c3ae8ddde2234e460d3c2f7430cedd43e4dad5ecd36cf6e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2\"" Dec 16 13:05:00.040490 containerd[2510]: time="2025-12-16T13:05:00.040425284Z" level=info msg="StartContainer for \"1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2\"" Dec 16 13:05:00.041810 containerd[2510]: time="2025-12-16T13:05:00.041764065Z" level=info msg="connecting to shim 1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2" address="unix:///run/containerd/s/2d0cc704e6ad4da72e7c4503d6bade7f8eefc86e9598ca7e247663fa5ad20974" protocol=ttrpc version=3 Dec 16 13:05:00.050646 containerd[2510]: time="2025-12-16T13:05:00.050367229Z" level=info msg="Container f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:00.073129 systemd[1]: Started cri-containerd-1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2.scope - libcontainer container 1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2. 
Dec 16 13:05:00.091457 kubelet[3592]: E1216 13:05:00.091430 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:00.097000 audit: BPF prog-id=127 op=LOAD Dec 16 13:05:00.098000 audit: BPF prog-id=128 op=LOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.098000 audit: BPF prog-id=128 op=UNLOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.098000 audit: BPF prog-id=129 op=LOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.098000 audit: BPF prog-id=130 op=LOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.098000 audit: BPF prog-id=130 op=UNLOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.098000 audit: BPF prog-id=129 op=UNLOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.098000 audit: BPF prog-id=131 op=LOAD Dec 16 13:05:00.098000 audit[3807]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3684 pid=3807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166336132313834616333393132633161313630626435393461656433 Dec 16 13:05:00.185161 containerd[2510]: time="2025-12-16T13:05:00.185126619Z" level=info msg="StartContainer for \"1f3a2184ac3912c1a160bd594aed3cfdb7f6eadd68a46c422b00dd8914c5f2c2\" returns successfully" Dec 16 13:05:00.197106 containerd[2510]: time="2025-12-16T13:05:00.197016389Z" level=info msg="CreateContainer within sandbox \"80e44f6864f6bb21b2c0cd0604496f53ad4b6ad09cbf6e9572c16fe753f236a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f\"" Dec 16 13:05:00.197512 containerd[2510]: time="2025-12-16T13:05:00.197490291Z" level=info msg="StartContainer for \"f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f\"" Dec 16 13:05:00.199559 containerd[2510]: time="2025-12-16T13:05:00.199535038Z" level=info msg="connecting to shim f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f" 
address="unix:///run/containerd/s/0a0c6b25f860846b2eb213baee37f36c5bef5ec553bf2870d78f64a063d903d7" protocol=ttrpc version=3 Dec 16 13:05:00.225048 systemd[1]: Started cri-containerd-f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f.scope - libcontainer container f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f. Dec 16 13:05:00.253000 audit: BPF prog-id=132 op=LOAD Dec 16 13:05:00.255000 audit: BPF prog-id=133 op=LOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3730 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.255000 audit: BPF prog-id=133 op=UNLOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.255000 audit: BPF prog-id=134 op=LOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3730 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.255000 audit: BPF prog-id=135 op=LOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3730 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.255000 audit: BPF prog-id=135 op=UNLOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.255000 audit: BPF prog-id=134 op=UNLOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3838 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.255000 audit: BPF prog-id=136 op=LOAD Dec 16 13:05:00.255000 audit[3838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3730 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:00.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326639313766396430356664373132363161326264366462633636 Dec 16 13:05:00.345990 containerd[2510]: time="2025-12-16T13:05:00.345871156Z" level=info msg="StartContainer for \"f72f917f9d05fd71261a2bd6dbc66c95ae71bd5084b812c1e06d6adf09036e4f\" returns successfully" Dec 16 13:05:01.096768 kubelet[3592]: E1216 13:05:01.096732 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:01.098813 kubelet[3592]: E1216 13:05:01.098790 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:01.100279 kubelet[3592]: E1216 13:05:01.100256 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:01.688888 kubelet[3592]: I1216 13:05:01.688820 3592 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:01.988043 kubelet[3592]: E1216 13:05:01.987650 3592 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:02.069203 kubelet[3592]: E1216 13:05:02.069112 3592 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4515.1.0-a-968fde264e.1881b3ddfc85296a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-968fde264e,UID:ci-4515.1.0-a-968fde264e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-968fde264e,},FirstTimestamp:2025-12-16 13:04:54.91167473 +0000 UTC m=+0.488985320,LastTimestamp:2025-12-16 13:04:54.91167473 +0000 UTC m=+0.488985320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-968fde264e,}" Dec 16 13:05:02.100162 kubelet[3592]: E1216 13:05:02.100131 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:02.100544 kubelet[3592]: E1216 13:05:02.100522 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:02.101425 kubelet[3592]: E1216 13:05:02.101405 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:02.131579 kubelet[3592]: I1216 13:05:02.131553 3592 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:02.131579 kubelet[3592]: E1216 13:05:02.131579 3592 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-a-968fde264e\": node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.157031 kubelet[3592]: E1216 13:05:02.156708 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.257384 kubelet[3592]: E1216 13:05:02.257280 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.357995 kubelet[3592]: E1216 13:05:02.357955 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.459021 kubelet[3592]: E1216 13:05:02.458983 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.559629 kubelet[3592]: E1216 13:05:02.559523 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.660152 kubelet[3592]: E1216 13:05:02.660114 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.760908 kubelet[3592]: E1216 13:05:02.760837 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:02.861514 kubelet[3592]: E1216 13:05:02.861410 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 
13:05:02.962747 kubelet[3592]: E1216 13:05:02.962540 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.063540 kubelet[3592]: E1216 13:05:03.063502 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.101865 kubelet[3592]: E1216 13:05:03.101565 3592 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-968fde264e\" not found" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:03.164404 kubelet[3592]: E1216 13:05:03.164295 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.265273 kubelet[3592]: E1216 13:05:03.265232 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.366159 kubelet[3592]: E1216 13:05:03.366114 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.466468 kubelet[3592]: E1216 13:05:03.466432 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.566980 kubelet[3592]: E1216 13:05:03.566946 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.667904 kubelet[3592]: E1216 13:05:03.667857 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.768639 kubelet[3592]: E1216 13:05:03.768526 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.869117 kubelet[3592]: E1216 13:05:03.869073 3592 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:03.970652 kubelet[3592]: E1216 13:05:03.970045 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:04.071268 kubelet[3592]: E1216 13:05:04.071167 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:04.171824 kubelet[3592]: E1216 13:05:04.171788 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:04.272433 kubelet[3592]: E1216 13:05:04.272388 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:04.373185 kubelet[3592]: E1216 13:05:04.373099 3592 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-968fde264e\" not found" Dec 16 13:05:04.436824 kubelet[3592]: I1216 13:05:04.436789 3592 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:04.450611 kubelet[3592]: I1216 13:05:04.450578 3592 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:04.450967 kubelet[3592]: I1216 13:05:04.450947 3592 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-968fde264e" Dec 16 13:05:04.456945 kubelet[3592]: I1216 13:05:04.456887 3592 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:04.457344 kubelet[3592]: I1216 13:05:04.457099 3592 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:05:04.463675 kubelet[3592]: I1216 13:05:04.463648 3592 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:04.763767 systemd[1]: Reload requested from client PID 3878 ('systemctl') (unit session-9.scope)... Dec 16 13:05:04.763795 systemd[1]: Reloading... Dec 16 13:05:04.865892 zram_generator::config[3928]: No configuration found. Dec 16 13:05:04.913869 kubelet[3592]: I1216 13:05:04.913721 3592 apiserver.go:52] "Watching apiserver" Dec 16 13:05:04.939004 kubelet[3592]: I1216 13:05:04.938980 3592 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:05:05.099309 systemd[1]: Reloading finished in 335 ms. Dec 16 13:05:05.104538 kubelet[3592]: I1216 13:05:05.104479 3592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-968fde264e" podStartSLOduration=1.104465731 podStartE2EDuration="1.104465731s" podCreationTimestamp="2025-12-16 13:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:05.103626552 +0000 UTC m=+10.680937140" watchObservedRunningTime="2025-12-16 13:05:05.104465731 +0000 UTC m=+10.681776315" Dec 16 13:05:05.124470 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:05.145904 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:05:05.146291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:05:05.152026 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 16 13:05:05.152078 kernel: audit: type=1131 audit(1765890305.145:410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:05.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:05.146356 systemd[1]: kubelet.service: Consumed 866ms CPU time, 132M memory peak. Dec 16 13:05:05.151074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:05.153661 kernel: audit: type=1334 audit(1765890305.151:411): prog-id=137 op=LOAD Dec 16 13:05:05.151000 audit: BPF prog-id=137 op=LOAD Dec 16 13:05:05.155323 kernel: audit: type=1334 audit(1765890305.151:412): prog-id=94 op=UNLOAD Dec 16 13:05:05.151000 audit: BPF prog-id=94 op=UNLOAD Dec 16 13:05:05.156681 kernel: audit: type=1334 audit(1765890305.151:413): prog-id=138 op=LOAD Dec 16 13:05:05.151000 audit: BPF prog-id=138 op=LOAD Dec 16 13:05:05.158118 kernel: audit: type=1334 audit(1765890305.151:414): prog-id=139 op=LOAD Dec 16 13:05:05.151000 audit: BPF prog-id=139 op=LOAD Dec 16 13:05:05.159657 kernel: audit: type=1334 audit(1765890305.151:415): prog-id=95 op=UNLOAD Dec 16 13:05:05.151000 audit: BPF prog-id=95 op=UNLOAD Dec 16 13:05:05.161279 kernel: audit: type=1334 audit(1765890305.151:416): prog-id=96 op=UNLOAD Dec 16 13:05:05.151000 audit: BPF prog-id=96 op=UNLOAD Dec 16 13:05:05.162828 kernel: audit: type=1334 audit(1765890305.154:417): prog-id=140 op=LOAD Dec 16 13:05:05.154000 audit: BPF prog-id=140 op=LOAD Dec 16 13:05:05.154000 audit: BPF prog-id=104 op=UNLOAD Dec 16 13:05:05.165574 kernel: audit: type=1334 audit(1765890305.154:418): prog-id=104 op=UNLOAD Dec 16 13:05:05.165626 kernel: audit: 
type=1334 audit(1765890305.154:419): prog-id=141 op=LOAD Dec 16 13:05:05.154000 audit: BPF prog-id=141 op=LOAD Dec 16 13:05:05.154000 audit: BPF prog-id=142 op=LOAD Dec 16 13:05:05.154000 audit: BPF prog-id=105 op=UNLOAD Dec 16 13:05:05.154000 audit: BPF prog-id=106 op=UNLOAD Dec 16 13:05:05.160000 audit: BPF prog-id=143 op=LOAD Dec 16 13:05:05.160000 audit: BPF prog-id=90 op=UNLOAD Dec 16 13:05:05.162000 audit: BPF prog-id=144 op=LOAD Dec 16 13:05:05.162000 audit: BPF prog-id=97 op=UNLOAD Dec 16 13:05:05.162000 audit: BPF prog-id=145 op=LOAD Dec 16 13:05:05.162000 audit: BPF prog-id=146 op=LOAD Dec 16 13:05:05.162000 audit: BPF prog-id=98 op=UNLOAD Dec 16 13:05:05.162000 audit: BPF prog-id=99 op=UNLOAD Dec 16 13:05:05.165000 audit: BPF prog-id=147 op=LOAD Dec 16 13:05:05.165000 audit: BPF prog-id=103 op=UNLOAD Dec 16 13:05:05.165000 audit: BPF prog-id=148 op=LOAD Dec 16 13:05:05.165000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:05:05.176000 audit: BPF prog-id=149 op=LOAD Dec 16 13:05:05.176000 audit: BPF prog-id=91 op=UNLOAD Dec 16 13:05:05.176000 audit: BPF prog-id=150 op=LOAD Dec 16 13:05:05.176000 audit: BPF prog-id=151 op=LOAD Dec 16 13:05:05.176000 audit: BPF prog-id=92 op=UNLOAD Dec 16 13:05:05.176000 audit: BPF prog-id=93 op=UNLOAD Dec 16 13:05:05.178000 audit: BPF prog-id=152 op=LOAD Dec 16 13:05:05.178000 audit: BPF prog-id=100 op=UNLOAD Dec 16 13:05:05.178000 audit: BPF prog-id=153 op=LOAD Dec 16 13:05:05.178000 audit: BPF prog-id=154 op=LOAD Dec 16 13:05:05.178000 audit: BPF prog-id=101 op=UNLOAD Dec 16 13:05:05.178000 audit: BPF prog-id=102 op=UNLOAD Dec 16 13:05:05.178000 audit: BPF prog-id=155 op=LOAD Dec 16 13:05:05.178000 audit: BPF prog-id=156 op=LOAD Dec 16 13:05:05.179000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:05:05.179000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:05:05.696526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 13:05:05.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:05.706472 (kubelet)[3994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:05.747347 kubelet[3994]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:05.747347 kubelet[3994]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:05.747347 kubelet[3994]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:05:05.747651 kubelet[3994]: I1216 13:05:05.747506 3994 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:05.756191 kubelet[3994]: I1216 13:05:05.756164 3994 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:05:05.756191 kubelet[3994]: I1216 13:05:05.756190 3994 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:05.756483 kubelet[3994]: I1216 13:05:05.756391 3994 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:05.757625 kubelet[3994]: I1216 13:05:05.757604 3994 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:05:05.759595 kubelet[3994]: I1216 13:05:05.759531 3994 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:05.763031 kubelet[3994]: I1216 13:05:05.763003 3994 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:05.766014 kubelet[3994]: I1216 13:05:05.765990 3994 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:05:05.766156 kubelet[3994]: I1216 13:05:05.766134 3994 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:05.766291 kubelet[3994]: I1216 13:05:05.766153 3994 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-968fde264e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:05.766384 kubelet[3994]: I1216 13:05:05.766295 3994 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
13:05:05.766384 kubelet[3994]: I1216 13:05:05.766304 3994 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:05:05.766384 kubelet[3994]: I1216 13:05:05.766342 3994 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:05.766471 kubelet[3994]: I1216 13:05:05.766464 3994 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:05:05.766493 kubelet[3994]: I1216 13:05:05.766475 3994 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:05.766579 kubelet[3994]: I1216 13:05:05.766500 3994 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:05:05.766579 kubelet[3994]: I1216 13:05:05.766516 3994 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:05.770099 kubelet[3994]: I1216 13:05:05.768822 3994 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:05:05.772558 kubelet[3994]: I1216 13:05:05.772538 3994 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:05:05.788873 kubelet[3994]: I1216 13:05:05.786777 3994 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:05:05.788873 kubelet[3994]: I1216 13:05:05.786817 3994 server.go:1289] "Started kubelet" Dec 16 13:05:05.791424 kubelet[3994]: I1216 13:05:05.791409 3994 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:05.794628 kubelet[3994]: E1216 13:05:05.794606 3994 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:05.797648 kubelet[3994]: I1216 13:05:05.797426 3994 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:05.798298 kubelet[3994]: I1216 13:05:05.798277 3994 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:05:05.805716 kubelet[3994]: I1216 13:05:05.805669 3994 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:05.807289 kubelet[3994]: I1216 13:05:05.807257 3994 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:05:05.807603 kubelet[3994]: I1216 13:05:05.807588 3994 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:05:05.808135 kubelet[3994]: I1216 13:05:05.807696 3994 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:05:05.808135 kubelet[3994]: I1216 13:05:05.808014 3994 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:05.809504 kubelet[3994]: I1216 13:05:05.809484 3994 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:05.811198 kubelet[3994]: I1216 13:05:05.811174 3994 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:05.811560 kubelet[3994]: I1216 13:05:05.811264 3994 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:05.813676 kubelet[3994]: I1216 13:05:05.813658 3994 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:05.816301 kubelet[3994]: I1216 13:05:05.815739 3994 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:05:05.821631 kubelet[3994]: I1216 13:05:05.821607 3994 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:05:05.821631 kubelet[3994]: I1216 13:05:05.821632 3994 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:05:05.821725 kubelet[3994]: I1216 13:05:05.821648 3994 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:05:05.821725 kubelet[3994]: I1216 13:05:05.821655 3994 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:05:05.821725 kubelet[3994]: E1216 13:05:05.821684 3994 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:05.868135 kubelet[3994]: I1216 13:05:05.868113 3994 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:05.868135 kubelet[3994]: I1216 13:05:05.868127 3994 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:05.868265 kubelet[3994]: I1216 13:05:05.868145 3994 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:05.868291 kubelet[3994]: I1216 13:05:05.868276 3994 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:05:05.868313 kubelet[3994]: I1216 13:05:05.868285 3994 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:05:05.868313 kubelet[3994]: I1216 13:05:05.868302 3994 policy_none.go:49] "None policy: Start" Dec 16 13:05:05.868313 kubelet[3994]: I1216 13:05:05.868312 3994 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:05:05.868376 kubelet[3994]: I1216 13:05:05.868322 3994 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:05:05.869058 kubelet[3994]: I1216 13:05:05.868407 3994 state_mem.go:75] "Updated machine memory state" Dec 16 13:05:05.873381 kubelet[3994]: E1216 13:05:05.873359 3994 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:05.873514 kubelet[3994]: I1216 13:05:05.873502 3994 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:05.873897 kubelet[3994]: I1216 13:05:05.873518 3994 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:05.876017 kubelet[3994]: I1216 13:05:05.876007 3994 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:05.878578 kubelet[3994]: E1216 13:05:05.878564 3994 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:05:05.922890 kubelet[3994]: I1216 13:05:05.922873 3994 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.923250 kubelet[3994]: I1216 13:05:05.923076 3994 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.924970 kubelet[3994]: I1216 13:05:05.923144 3994 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.936360 kubelet[3994]: I1216 13:05:05.936295 3994 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:05.936495 kubelet[3994]: E1216 13:05:05.936482 3994 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.937477 kubelet[3994]: I1216 13:05:05.937447 3994 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots]" Dec 16 13:05:05.937641 kubelet[3994]: E1216 13:05:05.937568 3994 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-968fde264e\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.937824 kubelet[3994]: I1216 13:05:05.937814 3994 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:05.937960 kubelet[3994]: E1216 13:05:05.937902 3994 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-968fde264e\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.982321 kubelet[3994]: I1216 13:05:05.982244 3994 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.994003 kubelet[3994]: I1216 13:05:05.993895 3994 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:05.994003 kubelet[3994]: I1216 13:05:05.993955 3994 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009004 kubelet[3994]: I1216 13:05:06.008287 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009004 kubelet[3994]: I1216 13:05:06.008369 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: 
\"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009004 kubelet[3994]: I1216 13:05:06.008418 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7291f6e8a5349c84efd675014d304c91-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-968fde264e\" (UID: \"7291f6e8a5349c84efd675014d304c91\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009004 kubelet[3994]: I1216 13:05:06.008438 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009004 kubelet[3994]: I1216 13:05:06.008480 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009146 kubelet[3994]: I1216 13:05:06.008499 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e12d78fc276679233c1e13f0da02538-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-968fde264e\" (UID: \"0e12d78fc276679233c1e13f0da02538\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009146 kubelet[3994]: I1216 13:05:06.008521 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8ca13fabe96b19e007c9fbcf6409dc9-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-968fde264e\" (UID: \"c8ca13fabe96b19e007c9fbcf6409dc9\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009146 kubelet[3994]: I1216 13:05:06.008573 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8ca13fabe96b19e007c9fbcf6409dc9-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-968fde264e\" (UID: \"c8ca13fabe96b19e007c9fbcf6409dc9\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.009146 kubelet[3994]: I1216 13:05:06.008595 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8ca13fabe96b19e007c9fbcf6409dc9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-968fde264e\" (UID: \"c8ca13fabe96b19e007c9fbcf6409dc9\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-968fde264e" Dec 16 13:05:06.769022 kubelet[3994]: I1216 13:05:06.768990 3994 apiserver.go:52] "Watching apiserver" Dec 16 13:05:06.808517 kubelet[3994]: I1216 13:05:06.808485 3994 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:05:06.854923 kubelet[3994]: I1216 13:05:06.854865 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-968fde264e" podStartSLOduration=2.8548250729999998 podStartE2EDuration="2.854825073s" podCreationTimestamp="2025-12-16 13:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:06.854455173 +0000 UTC m=+1.143367456" watchObservedRunningTime="2025-12-16 13:05:06.854825073 +0000 UTC m=+1.143737354" Dec 16 13:05:09.187560 kubelet[3994]: I1216 
13:05:09.187521 3994 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:05:09.188190 containerd[2510]: time="2025-12-16T13:05:09.187813561Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:05:09.188489 kubelet[3994]: I1216 13:05:09.188471 3994 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:05:10.039972 systemd[1]: Created slice kubepods-besteffort-pod66e921d7_1c56_4bc6_9c8f_3843e87e4fe4.slice - libcontainer container kubepods-besteffort-pod66e921d7_1c56_4bc6_9c8f_3843e87e4fe4.slice. Dec 16 13:05:10.133371 kubelet[3994]: I1216 13:05:10.133314 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66e921d7-1c56-4bc6-9c8f-3843e87e4fe4-lib-modules\") pod \"kube-proxy-hbhff\" (UID: \"66e921d7-1c56-4bc6-9c8f-3843e87e4fe4\") " pod="kube-system/kube-proxy-hbhff" Dec 16 13:05:10.133371 kubelet[3994]: I1216 13:05:10.133369 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66e921d7-1c56-4bc6-9c8f-3843e87e4fe4-kube-proxy\") pod \"kube-proxy-hbhff\" (UID: \"66e921d7-1c56-4bc6-9c8f-3843e87e4fe4\") " pod="kube-system/kube-proxy-hbhff" Dec 16 13:05:10.133525 kubelet[3994]: I1216 13:05:10.133399 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdpl6\" (UniqueName: \"kubernetes.io/projected/66e921d7-1c56-4bc6-9c8f-3843e87e4fe4-kube-api-access-bdpl6\") pod \"kube-proxy-hbhff\" (UID: \"66e921d7-1c56-4bc6-9c8f-3843e87e4fe4\") " pod="kube-system/kube-proxy-hbhff" Dec 16 13:05:10.133525 kubelet[3994]: I1216 13:05:10.133450 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/66e921d7-1c56-4bc6-9c8f-3843e87e4fe4-xtables-lock\") pod \"kube-proxy-hbhff\" (UID: \"66e921d7-1c56-4bc6-9c8f-3843e87e4fe4\") " pod="kube-system/kube-proxy-hbhff" Dec 16 13:05:10.348979 containerd[2510]: time="2025-12-16T13:05:10.348839670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hbhff,Uid:66e921d7-1c56-4bc6-9c8f-3843e87e4fe4,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:10.419566 systemd[1]: Created slice kubepods-besteffort-podc755f609_7c62_4032_a733_7e19b03f65b8.slice - libcontainer container kubepods-besteffort-podc755f609_7c62_4032_a733_7e19b03f65b8.slice. Dec 16 13:05:10.423880 containerd[2510]: time="2025-12-16T13:05:10.423584374Z" level=info msg="connecting to shim 7796d0061a626744e44dbdf84215512495ac1bac205351f6abe41bd3ac985e1f" address="unix:///run/containerd/s/6f340746996cd1cc31b0b1efac465baaf686d5b59b55a6bc671dec15dbd52a8f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:10.435309 kubelet[3994]: I1216 13:05:10.435203 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c755f609-7c62-4032-a733-7e19b03f65b8-var-lib-calico\") pod \"tigera-operator-7dcd859c48-m8958\" (UID: \"c755f609-7c62-4032-a733-7e19b03f65b8\") " pod="tigera-operator/tigera-operator-7dcd859c48-m8958" Dec 16 13:05:10.435309 kubelet[3994]: I1216 13:05:10.435247 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzgb\" (UniqueName: \"kubernetes.io/projected/c755f609-7c62-4032-a733-7e19b03f65b8-kube-api-access-bdzgb\") pod \"tigera-operator-7dcd859c48-m8958\" (UID: \"c755f609-7c62-4032-a733-7e19b03f65b8\") " pod="tigera-operator/tigera-operator-7dcd859c48-m8958" Dec 16 13:05:10.450008 systemd[1]: Started cri-containerd-7796d0061a626744e44dbdf84215512495ac1bac205351f6abe41bd3ac985e1f.scope - libcontainer container 
7796d0061a626744e44dbdf84215512495ac1bac205351f6abe41bd3ac985e1f. Dec 16 13:05:10.457000 audit: BPF prog-id=157 op=LOAD Dec 16 13:05:10.459194 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 13:05:10.459241 kernel: audit: type=1334 audit(1765890310.457:452): prog-id=157 op=LOAD Dec 16 13:05:10.458000 audit: BPF prog-id=158 op=LOAD Dec 16 13:05:10.461459 kernel: audit: type=1334 audit(1765890310.458:453): prog-id=158 op=LOAD Dec 16 13:05:10.458000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.464991 kernel: audit: type=1300 audit(1765890310.458:453): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.468964 kernel: audit: type=1327 audit(1765890310.458:453): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.458000 audit: BPF prog-id=158 op=UNLOAD Dec 16 13:05:10.470937 kernel: audit: type=1334 audit(1765890310.458:454): prog-id=158 op=UNLOAD Dec 16 13:05:10.458000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.474521 kernel: audit: type=1300 audit(1765890310.458:454): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.479869 kernel: audit: type=1327 audit(1765890310.458:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.480070 kernel: audit: type=1334 audit(1765890310.459:455): prog-id=159 op=LOAD Dec 16 13:05:10.459000 audit: BPF prog-id=159 op=LOAD Dec 16 13:05:10.459000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.491274 kernel: audit: type=1300 audit(1765890310.459:455): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.499155 kernel: audit: type=1327 audit(1765890310.459:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.459000 audit: BPF prog-id=160 op=LOAD Dec 16 13:05:10.459000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.459000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:05:10.459000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.459000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.459000 audit: BPF prog-id=159 op=UNLOAD Dec 16 13:05:10.459000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.459000 audit: BPF prog-id=161 op=LOAD Dec 16 13:05:10.459000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737393664303036316136323637343465343464626466383432313535 Dec 16 13:05:10.509923 containerd[2510]: time="2025-12-16T13:05:10.509889016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hbhff,Uid:66e921d7-1c56-4bc6-9c8f-3843e87e4fe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7796d0061a626744e44dbdf84215512495ac1bac205351f6abe41bd3ac985e1f\"" Dec 16 13:05:10.517995 containerd[2510]: 
time="2025-12-16T13:05:10.517969155Z" level=info msg="CreateContainer within sandbox \"7796d0061a626744e44dbdf84215512495ac1bac205351f6abe41bd3ac985e1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:05:10.544907 containerd[2510]: time="2025-12-16T13:05:10.544324873Z" level=info msg="Container 63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:10.572706 containerd[2510]: time="2025-12-16T13:05:10.572675846Z" level=info msg="CreateContainer within sandbox \"7796d0061a626744e44dbdf84215512495ac1bac205351f6abe41bd3ac985e1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028\"" Dec 16 13:05:10.573168 containerd[2510]: time="2025-12-16T13:05:10.573145027Z" level=info msg="StartContainer for \"63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028\"" Dec 16 13:05:10.574692 containerd[2510]: time="2025-12-16T13:05:10.574634684Z" level=info msg="connecting to shim 63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028" address="unix:///run/containerd/s/6f340746996cd1cc31b0b1efac465baaf686d5b59b55a6bc671dec15dbd52a8f" protocol=ttrpc version=3 Dec 16 13:05:10.594012 systemd[1]: Started cri-containerd-63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028.scope - libcontainer container 63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028. 
Dec 16 13:05:10.631000 audit: BPF prog-id=162 op=LOAD Dec 16 13:05:10.631000 audit[4091]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4053 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633623465396539646332313966626632636138313032353935653635 Dec 16 13:05:10.631000 audit: BPF prog-id=163 op=LOAD Dec 16 13:05:10.631000 audit[4091]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4053 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633623465396539646332313966626632636138313032353935653635 Dec 16 13:05:10.631000 audit: BPF prog-id=163 op=UNLOAD Dec 16 13:05:10.631000 audit[4091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.631000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633623465396539646332313966626632636138313032353935653635 Dec 16 13:05:10.631000 audit: BPF prog-id=162 op=UNLOAD Dec 16 13:05:10.631000 audit[4091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633623465396539646332313966626632636138313032353935653635 Dec 16 13:05:10.631000 audit: BPF prog-id=164 op=LOAD Dec 16 13:05:10.631000 audit[4091]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4053 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633623465396539646332313966626632636138313032353935653635 Dec 16 13:05:10.653521 containerd[2510]: time="2025-12-16T13:05:10.653420842Z" level=info msg="StartContainer for \"63b4e9e9dc219fbf2ca8102595e654128ec687d1cb6b0d09c269e26b2ab2e028\" returns successfully" Dec 16 13:05:10.726190 containerd[2510]: time="2025-12-16T13:05:10.726140124Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m8958,Uid:c755f609-7c62-4032-a733-7e19b03f65b8,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:05:10.775000 audit[4167]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:10.775000 audit[4167]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde01765a0 a2=0 a3=7ffde017658c items=0 ppid=4104 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:05:10.777000 audit[4163]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.777000 audit[4163]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbde7fc90 a2=0 a3=34cbfb796294efa8 items=0 ppid=4104 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.777000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:05:10.778717 containerd[2510]: time="2025-12-16T13:05:10.778153632Z" level=info msg="connecting to shim 5ed3df7a9513b136ad17cb7c57a5948701d7f5bbe2e42565913ca7319f01aa68" address="unix:///run/containerd/s/04e708ed507042982fc43ca5f4abb096529db79035d544dc9f35f177bde12c0b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:10.784000 audit[4176]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=4176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
13:05:10.784000 audit[4176]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfd739500 a2=0 a3=7ffcfd7394ec items=0 ppid=4104 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:05:10.786000 audit[4177]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=4177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:10.786000 audit[4177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe936d05f0 a2=0 a3=7ffe936d05dc items=0 ppid=4104 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:05:10.788000 audit[4182]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.788000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb300c690 a2=0 a3=7ffeb300c67c items=0 ppid=4104 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:05:10.791000 audit[4185]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=4185 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:10.791000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef2896780 a2=0 a3=7ffef289676c items=0 ppid=4104 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:05:10.807060 systemd[1]: Started cri-containerd-5ed3df7a9513b136ad17cb7c57a5948701d7f5bbe2e42565913ca7319f01aa68.scope - libcontainer container 5ed3df7a9513b136ad17cb7c57a5948701d7f5bbe2e42565913ca7319f01aa68. Dec 16 13:05:10.814000 audit: BPF prog-id=165 op=LOAD Dec 16 13:05:10.814000 audit: BPF prog-id=166 op=LOAD Dec 16 13:05:10.814000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.814000 audit: BPF prog-id=166 op=UNLOAD Dec 16 13:05:10.814000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.815000 audit: BPF prog-id=167 op=LOAD Dec 16 13:05:10.815000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.815000 audit: BPF prog-id=168 op=LOAD Dec 16 13:05:10.815000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.815000 audit: BPF prog-id=168 op=UNLOAD Dec 16 13:05:10.815000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:10.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.815000 audit: BPF prog-id=167 op=UNLOAD Dec 16 13:05:10.815000 audit[4187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.816000 audit: BPF prog-id=169 op=LOAD Dec 16 13:05:10.816000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4165 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565643364663761393531336231333661643137636237633537613539 Dec 16 13:05:10.850360 containerd[2510]: time="2025-12-16T13:05:10.850328021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m8958,Uid:c755f609-7c62-4032-a733-7e19b03f65b8,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"5ed3df7a9513b136ad17cb7c57a5948701d7f5bbe2e42565913ca7319f01aa68\"" Dec 16 13:05:10.852712 containerd[2510]: time="2025-12-16T13:05:10.852634975Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:05:10.864645 kubelet[3994]: I1216 13:05:10.864437 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hbhff" podStartSLOduration=0.864419457 podStartE2EDuration="864.419457ms" podCreationTimestamp="2025-12-16 13:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:10.864368519 +0000 UTC m=+5.153280802" watchObservedRunningTime="2025-12-16 13:05:10.864419457 +0000 UTC m=+5.153331746" Dec 16 13:05:10.879000 audit[4213]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.879000 audit[4213]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff5125f0b0 a2=0 a3=7fff5125f09c items=0 ppid=4104 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:05:10.882000 audit[4215]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.882000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd449ae6c0 a2=0 a3=7ffd449ae6ac items=0 ppid=4104 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.882000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 13:05:10.886000 audit[4218]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.886000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd052c7f90 a2=0 a3=7ffd052c7f7c items=0 ppid=4104 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 13:05:10.887000 audit[4219]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.887000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbe8ea6f0 a2=0 a3=7ffdbe8ea6dc items=0 ppid=4104 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:05:10.889000 audit[4221]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.889000 audit[4221]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc70b03130 a2=0 a3=7ffc70b0311c items=0 ppid=4104 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:05:10.890000 audit[4222]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.890000 audit[4222]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc57dca420 a2=0 a3=7ffc57dca40c items=0 ppid=4104 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:05:10.892000 audit[4224]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.892000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca39107e0 a2=0 a3=7ffca39107cc items=0 ppid=4104 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.892000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:05:10.895000 audit[4227]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.895000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc20b6f680 a2=0 a3=7ffc20b6f66c items=0 ppid=4104 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 13:05:10.897000 audit[4228]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.897000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb2b2ac00 a2=0 a3=7ffeb2b2abec items=0 ppid=4104 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:05:10.899000 audit[4230]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.899000 audit[4230]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffcf4c91710 a2=0 a3=7ffcf4c916fc items=0 ppid=4104 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:05:10.900000 audit[4231]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.900000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedbd76a00 a2=0 a3=7ffedbd769ec items=0 ppid=4104 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.900000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:05:10.903000 audit[4233]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.903000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcffd24b90 a2=0 a3=7ffcffd24b7c items=0 ppid=4104 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.903000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:05:10.906000 audit[4236]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.906000 audit[4236]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc62020120 a2=0 a3=7ffc6202010c items=0 ppid=4104 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:05:10.909000 audit[4239]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.909000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfdc05170 a2=0 a3=7ffdfdc0515c items=0 ppid=4104 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:05:10.910000 audit[4240]: NETFILTER_CFG table=nat:77 family=2 entries=1 
op=nft_register_chain pid=4240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.910000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdfc334470 a2=0 a3=7ffdfc33445c items=0 ppid=4104 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:05:10.912000 audit[4242]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.912000 audit[4242]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe6358f490 a2=0 a3=7ffe6358f47c items=0 ppid=4104 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:10.915000 audit[4245]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.915000 audit[4245]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe300be170 a2=0 a3=7ffe300be15c items=0 ppid=4104 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.915000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:10.916000 audit[4246]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.916000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc00843ca0 a2=0 a3=7ffc00843c8c items=0 ppid=4104 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:05:10.918000 audit[4248]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:05:10.918000 audit[4248]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdeb879260 a2=0 a3=7ffdeb87924c items=0 ppid=4104 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:10.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:05:11.005000 audit[4254]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.005000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6283aea0 a2=0 a3=7ffc6283ae8c 
items=0 ppid=4104 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.032000 audit[4254]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:11.032000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc6283aea0 a2=0 a3=7ffc6283ae8c items=0 ppid=4104 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.034000 audit[4259]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.034000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc1952f350 a2=0 a3=7ffc1952f33c items=0 ppid=4104 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:05:11.037000 audit[4261]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.037000 audit[4261]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffcc931a720 a2=0 a3=7ffcc931a70c items=0 ppid=4104 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 13:05:11.041000 audit[4264]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.041000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff30d91730 a2=0 a3=7fff30d9171c items=0 ppid=4104 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 13:05:11.042000 audit[4265]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.042000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee425ef00 a2=0 a3=7ffee425eeec items=0 ppid=4104 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.042000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:05:11.045000 audit[4267]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.045000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff965cb900 a2=0 a3=7fff965cb8ec items=0 ppid=4104 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:05:11.046000 audit[4268]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.046000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe613a5b0 a2=0 a3=7fffe613a59c items=0 ppid=4104 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:05:11.048000 audit[4270]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.048000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd2ef71120 a2=0 a3=7ffd2ef7110c items=0 ppid=4104 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 13:05:11.051000 audit[4273]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.051000 audit[4273]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffebe7147e0 a2=0 a3=7ffebe7147cc items=0 ppid=4104 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:05:11.052000 audit[4274]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.052000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1859ade0 a2=0 a3=7ffd1859adcc items=0 ppid=4104 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:05:11.055000 audit[4276]: NETFILTER_CFG 
table=filter:93 family=10 entries=1 op=nft_register_rule pid=4276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.055000 audit[4276]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe41ec9d20 a2=0 a3=7ffe41ec9d0c items=0 ppid=4104 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:05:11.056000 audit[4277]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.056000 audit[4277]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef5d77790 a2=0 a3=7ffef5d7777c items=0 ppid=4104 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:05:11.058000 audit[4279]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.058000 audit[4279]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd1dd06c0 a2=0 a3=7ffcd1dd06ac items=0 ppid=4104 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.058000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:05:11.061000 audit[4282]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.061000 audit[4282]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0f15c480 a2=0 a3=7ffd0f15c46c items=0 ppid=4104 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:05:11.065000 audit[4285]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.065000 audit[4285]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0e30c340 a2=0 a3=7ffd0e30c32c items=0 ppid=4104 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 13:05:11.068000 audit[4286]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=4286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.068000 audit[4286]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc06b68350 a2=0 a3=7ffc06b6833c items=0 ppid=4104 pid=4286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:05:11.070000 audit[4288]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.070000 audit[4288]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc70d011b0 a2=0 a3=7ffc70d0119c items=0 ppid=4104 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:11.074000 audit[4291]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.074000 audit[4291]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4f5f00c0 a2=0 a3=7ffd4f5f00ac items=0 ppid=4104 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.074000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:05:11.075000 audit[4292]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.075000 audit[4292]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6e25a270 a2=0 a3=7fff6e25a25c items=0 ppid=4104 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:05:11.077000 audit[4294]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.077000 audit[4294]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcf6e0c850 a2=0 a3=7ffcf6e0c83c items=0 ppid=4104 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:05:11.078000 audit[4295]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.078000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe100aafd0 a2=0 
a3=7ffe100aafbc items=0 ppid=4104 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:05:11.080000 audit[4297]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.080000 audit[4297]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff63e24010 a2=0 a3=7fff63e23ffc items=0 ppid=4104 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:05:11.083000 audit[4300]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:05:11.083000 audit[4300]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff67e26100 a2=0 a3=7fff67e260ec items=0 ppid=4104 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:05:11.089000 audit[4302]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:05:11.089000 audit[4302]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd4e66fcb0 a2=0 a3=7ffd4e66fc9c items=0 ppid=4104 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.089000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:11.090000 audit[4302]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:05:11.090000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd4e66fcb0 a2=0 a3=7ffd4e66fc9c items=0 ppid=4104 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:11.090000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:13.848233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17569827.mount: Deactivated successfully. 
Dec 16 13:05:14.624760 containerd[2510]: time="2025-12-16T13:05:14.624714778Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:14.628188 containerd[2510]: time="2025-12-16T13:05:14.628150133Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 13:05:14.634066 containerd[2510]: time="2025-12-16T13:05:14.633881531Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:14.639222 containerd[2510]: time="2025-12-16T13:05:14.639192026Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:14.639693 containerd[2510]: time="2025-12-16T13:05:14.639672820Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.786950894s" Dec 16 13:05:14.639861 containerd[2510]: time="2025-12-16T13:05:14.639774026Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:05:14.650660 containerd[2510]: time="2025-12-16T13:05:14.650634332Z" level=info msg="CreateContainer within sandbox \"5ed3df7a9513b136ad17cb7c57a5948701d7f5bbe2e42565913ca7319f01aa68\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:05:14.678273 containerd[2510]: time="2025-12-16T13:05:14.678244160Z" level=info msg="Container 
f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:14.700443 containerd[2510]: time="2025-12-16T13:05:14.700418692Z" level=info msg="CreateContainer within sandbox \"5ed3df7a9513b136ad17cb7c57a5948701d7f5bbe2e42565913ca7319f01aa68\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f\"" Dec 16 13:05:14.701054 containerd[2510]: time="2025-12-16T13:05:14.700874972Z" level=info msg="StartContainer for \"f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f\"" Dec 16 13:05:14.702068 containerd[2510]: time="2025-12-16T13:05:14.701991287Z" level=info msg="connecting to shim f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f" address="unix:///run/containerd/s/04e708ed507042982fc43ca5f4abb096529db79035d544dc9f35f177bde12c0b" protocol=ttrpc version=3 Dec 16 13:05:14.722036 systemd[1]: Started cri-containerd-f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f.scope - libcontainer container f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f. 
Dec 16 13:05:14.729000 audit: BPF prog-id=170 op=LOAD Dec 16 13:05:14.729000 audit: BPF prog-id=171 op=LOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.729000 audit: BPF prog-id=171 op=UNLOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.729000 audit: BPF prog-id=172 op=LOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.729000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.729000 audit: BPF prog-id=173 op=LOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.729000 audit: BPF prog-id=173 op=UNLOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.729000 audit: BPF prog-id=172 op=UNLOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.729000 audit: BPF prog-id=174 op=LOAD Dec 16 13:05:14.729000 audit[4311]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4165 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:14.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631656335633062613436306632346465613837363735393337336138 Dec 16 13:05:14.755949 containerd[2510]: time="2025-12-16T13:05:14.755920807Z" level=info msg="StartContainer for \"f1ec5c0ba460f24dea876759373a8275be9c6861add877ad01722185c996068f\" returns successfully" Dec 16 13:05:14.869999 kubelet[3994]: I1216 13:05:14.869946 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-m8958" podStartSLOduration=1.081507101 podStartE2EDuration="4.869926696s" podCreationTimestamp="2025-12-16 13:05:10 +0000 UTC" firstStartedPulling="2025-12-16 13:05:10.85203129 +0000 UTC m=+5.140943564" lastFinishedPulling="2025-12-16 13:05:14.640450883 +0000 UTC m=+8.929363159" observedRunningTime="2025-12-16 13:05:14.869647299 +0000 UTC m=+9.158559580" watchObservedRunningTime="2025-12-16 13:05:14.869926696 +0000 UTC m=+9.158839079" Dec 16 13:05:20.355501 sudo[2966]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:20.362245 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 13:05:20.362335 
kernel: audit: type=1106 audit(1765890320.355:532): pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:20.355000 audit[2966]: USER_END pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:20.361000 audit[2966]: CRED_DISP pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:05:20.371239 kernel: audit: type=1104 audit(1765890320.361:533): pid=2966 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:05:20.459031 sshd[2965]: Connection closed by 10.200.16.10 port 44236 Dec 16 13:05:20.460032 sshd-session[2962]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:20.460000 audit[2962]: USER_END pid=2962 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:20.466944 kernel: audit: type=1106 audit(1765890320.460:534): pid=2962 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:20.467932 systemd-logind[2482]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:05:20.468900 systemd[1]: sshd@6-10.200.4.31:22-10.200.16.10:44236.service: Deactivated successfully. Dec 16 13:05:20.472099 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:05:20.461000 audit[2962]: CRED_DISP pid=2962 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:20.477865 kernel: audit: type=1104 audit(1765890320.461:535): pid=2962 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:05:20.477285 systemd[1]: session-9.scope: Consumed 3.811s CPU time, 231.6M memory peak. Dec 16 13:05:20.482571 systemd-logind[2482]: Removed session 9. 
Dec 16 13:05:20.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.31:22-10.200.16.10:44236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:20.490860 kernel: audit: type=1131 audit(1765890320.469:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.31:22-10.200.16.10:44236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:05:21.204000 audit[4393]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:21.209080 kernel: audit: type=1325 audit(1765890321.204:537): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:21.204000 audit[4393]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe7c1acab0 a2=0 a3=7ffe7c1aca9c items=0 ppid=4104 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.218438 kernel: audit: type=1300 audit(1765890321.204:537): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe7c1acab0 a2=0 a3=7ffe7c1aca9c items=0 ppid=4104 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.218510 kernel: audit: type=1327 audit(1765890321.204:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:21.204000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:21.218000 audit[4393]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:21.224856 kernel: audit: type=1325 audit(1765890321.218:538): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:21.218000 audit[4393]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7c1acab0 a2=0 a3=0 items=0 ppid=4104 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.234899 kernel: audit: type=1300 audit(1765890321.218:538): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7c1acab0 a2=0 a3=0 items=0 ppid=4104 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:21.295000 audit[4395]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:21.295000 audit[4395]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff71fdaab0 a2=0 a3=7fff71fdaa9c items=0 ppid=4104 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.295000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:21.300000 audit[4395]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:21.300000 audit[4395]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff71fdaab0 a2=0 a3=0 items=0 ppid=4104 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:21.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:23.571000 audit[4398]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:23.571000 audit[4398]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcd9dd0680 a2=0 a3=7ffcd9dd066c items=0 ppid=4104 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:23.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:23.575000 audit[4398]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:23.575000 audit[4398]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd9dd0680 a2=0 a3=0 items=0 ppid=4104 pid=4398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:23.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:23.589000 audit[4400]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:23.589000 audit[4400]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff5866e3f0 a2=0 a3=7fff5866e3dc items=0 ppid=4104 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:23.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:23.597000 audit[4400]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:23.597000 audit[4400]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5866e3f0 a2=0 a3=0 items=0 ppid=4104 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:23.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:24.607000 audit[4402]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:24.607000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd2b5ceb80 a2=0 a3=7ffd2b5ceb6c items=0 ppid=4104 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:24.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:24.617000 audit[4402]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:24.617000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2b5ceb80 a2=0 a3=0 items=0 ppid=4104 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:24.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:25.254343 systemd[1]: Created slice kubepods-besteffort-podc800952a_854e_473e_af09_9bdc6c776ae1.slice - libcontainer container kubepods-besteffort-podc800952a_854e_473e_af09_9bdc6c776ae1.slice. 
Dec 16 13:05:25.327716 kubelet[3994]: I1216 13:05:25.327673 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c800952a-854e-473e-af09-9bdc6c776ae1-tigera-ca-bundle\") pod \"calico-typha-767c68cc8d-wk4mb\" (UID: \"c800952a-854e-473e-af09-9bdc6c776ae1\") " pod="calico-system/calico-typha-767c68cc8d-wk4mb" Dec 16 13:05:25.327716 kubelet[3994]: I1216 13:05:25.327715 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c800952a-854e-473e-af09-9bdc6c776ae1-typha-certs\") pod \"calico-typha-767c68cc8d-wk4mb\" (UID: \"c800952a-854e-473e-af09-9bdc6c776ae1\") " pod="calico-system/calico-typha-767c68cc8d-wk4mb" Dec 16 13:05:25.329176 kubelet[3994]: I1216 13:05:25.327735 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vlpb\" (UniqueName: \"kubernetes.io/projected/c800952a-854e-473e-af09-9bdc6c776ae1-kube-api-access-2vlpb\") pod \"calico-typha-767c68cc8d-wk4mb\" (UID: \"c800952a-854e-473e-af09-9bdc6c776ae1\") " pod="calico-system/calico-typha-767c68cc8d-wk4mb" Dec 16 13:05:25.375884 systemd[1]: Created slice kubepods-besteffort-pod087e5f70_20b4_4130_b83f_3e42982fb2b2.slice - libcontainer container kubepods-besteffort-pod087e5f70_20b4_4130_b83f_3e42982fb2b2.slice. 
Dec 16 13:05:25.428664 kubelet[3994]: I1216 13:05:25.428608 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/087e5f70-20b4-4130-b83f-3e42982fb2b2-node-certs\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.428664 kubelet[3994]: I1216 13:05:25.428640 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-var-run-calico\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.428664 kubelet[3994]: I1216 13:05:25.428659 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-lib-modules\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430010 kubelet[3994]: I1216 13:05:25.428675 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/087e5f70-20b4-4130-b83f-3e42982fb2b2-tigera-ca-bundle\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430010 kubelet[3994]: I1216 13:05:25.428691 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-cni-log-dir\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430010 kubelet[3994]: I1216 13:05:25.428709 3994 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-flexvol-driver-host\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430010 kubelet[3994]: I1216 13:05:25.428748 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-cni-bin-dir\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430010 kubelet[3994]: I1216 13:05:25.428775 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-cni-net-dir\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430136 kubelet[3994]: I1216 13:05:25.429048 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-policysync\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430136 kubelet[3994]: I1216 13:05:25.429080 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-var-lib-calico\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430136 kubelet[3994]: I1216 13:05:25.429102 3994 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/087e5f70-20b4-4130-b83f-3e42982fb2b2-xtables-lock\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.430136 kubelet[3994]: I1216 13:05:25.429129 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z54tt\" (UniqueName: \"kubernetes.io/projected/087e5f70-20b4-4130-b83f-3e42982fb2b2-kube-api-access-z54tt\") pod \"calico-node-m5msp\" (UID: \"087e5f70-20b4-4130-b83f-3e42982fb2b2\") " pod="calico-system/calico-node-m5msp" Dec 16 13:05:25.532027 kubelet[3994]: E1216 13:05:25.531923 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.532027 kubelet[3994]: W1216 13:05:25.531948 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.532027 kubelet[3994]: E1216 13:05:25.531969 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.532200 kubelet[3994]: E1216 13:05:25.532160 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.532200 kubelet[3994]: W1216 13:05:25.532167 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.532200 kubelet[3994]: E1216 13:05:25.532176 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.532282 kubelet[3994]: E1216 13:05:25.532272 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.532282 kubelet[3994]: W1216 13:05:25.532277 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.532822 kubelet[3994]: E1216 13:05:25.532283 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.533150 kubelet[3994]: E1216 13:05:25.533116 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.533150 kubelet[3994]: W1216 13:05:25.533141 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.533236 kubelet[3994]: E1216 13:05:25.533155 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.533919 kubelet[3994]: E1216 13:05:25.533904 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.534949 kubelet[3994]: W1216 13:05:25.533921 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.534949 kubelet[3994]: E1216 13:05:25.533932 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.535322 kubelet[3994]: E1216 13:05:25.534998 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.535322 kubelet[3994]: W1216 13:05:25.535011 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.535322 kubelet[3994]: E1216 13:05:25.535024 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.535322 kubelet[3994]: E1216 13:05:25.535141 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.535322 kubelet[3994]: W1216 13:05:25.535147 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.535322 kubelet[3994]: E1216 13:05:25.535155 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.535950 kubelet[3994]: E1216 13:05:25.535936 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.535950 kubelet[3994]: W1216 13:05:25.535950 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.536037 kubelet[3994]: E1216 13:05:25.535962 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.536174 kubelet[3994]: E1216 13:05:25.536163 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.536204 kubelet[3994]: W1216 13:05:25.536174 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.536204 kubelet[3994]: E1216 13:05:25.536183 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.536925 kubelet[3994]: E1216 13:05:25.536322 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.536925 kubelet[3994]: W1216 13:05:25.536328 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.536925 kubelet[3994]: E1216 13:05:25.536335 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.537893 kubelet[3994]: E1216 13:05:25.537871 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.537893 kubelet[3994]: W1216 13:05:25.537887 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.537980 kubelet[3994]: E1216 13:05:25.537900 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.538057 kubelet[3994]: E1216 13:05:25.538050 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.538118 kubelet[3994]: W1216 13:05:25.538058 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.538118 kubelet[3994]: E1216 13:05:25.538066 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.538188 kubelet[3994]: E1216 13:05:25.538172 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.538188 kubelet[3994]: W1216 13:05:25.538177 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.538188 kubelet[3994]: E1216 13:05:25.538184 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.538318 kubelet[3994]: E1216 13:05:25.538309 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.538318 kubelet[3994]: W1216 13:05:25.538316 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.538371 kubelet[3994]: E1216 13:05:25.538325 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.549569 kubelet[3994]: E1216 13:05:25.549548 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.549569 kubelet[3994]: W1216 13:05:25.549567 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.549671 kubelet[3994]: E1216 13:05:25.549579 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.570791 containerd[2510]: time="2025-12-16T13:05:25.570729592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-767c68cc8d-wk4mb,Uid:c800952a-854e-473e-af09-9bdc6c776ae1,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:25.577645 kubelet[3994]: E1216 13:05:25.577434 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:25.610937 kubelet[3994]: E1216 13:05:25.610915 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611035 kubelet[3994]: W1216 13:05:25.610940 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611035 kubelet[3994]: E1216 13:05:25.610955 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.611120 kubelet[3994]: E1216 13:05:25.611077 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611120 kubelet[3994]: W1216 13:05:25.611083 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611120 kubelet[3994]: E1216 13:05:25.611104 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.611256 kubelet[3994]: E1216 13:05:25.611212 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611256 kubelet[3994]: W1216 13:05:25.611217 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611256 kubelet[3994]: E1216 13:05:25.611224 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.611471 kubelet[3994]: E1216 13:05:25.611456 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611471 kubelet[3994]: W1216 13:05:25.611468 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611553 kubelet[3994]: E1216 13:05:25.611477 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.611615 kubelet[3994]: E1216 13:05:25.611604 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611615 kubelet[3994]: W1216 13:05:25.611613 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611704 kubelet[3994]: E1216 13:05:25.611620 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.611732 kubelet[3994]: E1216 13:05:25.611723 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611732 kubelet[3994]: W1216 13:05:25.611728 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611807 kubelet[3994]: E1216 13:05:25.611735 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.611877 kubelet[3994]: E1216 13:05:25.611828 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.611877 kubelet[3994]: W1216 13:05:25.611833 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.611877 kubelet[3994]: E1216 13:05:25.611859 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612004 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.612895 kubelet[3994]: W1216 13:05:25.612010 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612018 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612133 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.612895 kubelet[3994]: W1216 13:05:25.612138 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612142 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612239 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.612895 kubelet[3994]: W1216 13:05:25.612243 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612248 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.612895 kubelet[3994]: E1216 13:05:25.612351 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613065 kubelet[3994]: W1216 13:05:25.612358 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613065 kubelet[3994]: E1216 13:05:25.612366 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.613065 kubelet[3994]: E1216 13:05:25.612473 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613065 kubelet[3994]: W1216 13:05:25.612478 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613065 kubelet[3994]: E1216 13:05:25.612483 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.613065 kubelet[3994]: E1216 13:05:25.612579 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613065 kubelet[3994]: W1216 13:05:25.612584 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613065 kubelet[3994]: E1216 13:05:25.612588 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.613065 kubelet[3994]: E1216 13:05:25.612679 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613065 kubelet[3994]: W1216 13:05:25.612736 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613236 kubelet[3994]: E1216 13:05:25.612741 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.613292 kubelet[3994]: E1216 13:05:25.613279 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613292 kubelet[3994]: W1216 13:05:25.613289 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613414 kubelet[3994]: E1216 13:05:25.613297 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.613414 kubelet[3994]: E1216 13:05:25.613412 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613472 kubelet[3994]: W1216 13:05:25.613417 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613472 kubelet[3994]: E1216 13:05:25.613424 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.613551 kubelet[3994]: E1216 13:05:25.613538 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613551 kubelet[3994]: W1216 13:05:25.613543 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613551 kubelet[3994]: E1216 13:05:25.613549 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.613691 kubelet[3994]: E1216 13:05:25.613643 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613691 kubelet[3994]: W1216 13:05:25.613648 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613691 kubelet[3994]: E1216 13:05:25.613655 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.613891 kubelet[3994]: E1216 13:05:25.613766 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.613891 kubelet[3994]: W1216 13:05:25.613771 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.613891 kubelet[3994]: E1216 13:05:25.613778 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.613891 kubelet[3994]: E1216 13:05:25.613892 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.614029 kubelet[3994]: W1216 13:05:25.613897 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.614029 kubelet[3994]: E1216 13:05:25.613903 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.631800 kubelet[3994]: E1216 13:05:25.631768 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.631800 kubelet[3994]: W1216 13:05:25.631782 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.632115 kubelet[3994]: E1216 13:05:25.631804 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.632115 kubelet[3994]: I1216 13:05:25.631828 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/da16849a-2afd-49e7-91d5-6aafd4f3fe06-varrun\") pod \"csi-node-driver-sc2zv\" (UID: \"da16849a-2afd-49e7-91d5-6aafd4f3fe06\") " pod="calico-system/csi-node-driver-sc2zv" Dec 16 13:05:25.634007 kubelet[3994]: E1216 13:05:25.633988 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.634007 kubelet[3994]: W1216 13:05:25.634006 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.634219 kubelet[3994]: E1216 13:05:25.634020 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.634219 kubelet[3994]: I1216 13:05:25.634158 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da16849a-2afd-49e7-91d5-6aafd4f3fe06-registration-dir\") pod \"csi-node-driver-sc2zv\" (UID: \"da16849a-2afd-49e7-91d5-6aafd4f3fe06\") " pod="calico-system/csi-node-driver-sc2zv" Dec 16 13:05:25.634219 kubelet[3994]: E1216 13:05:25.634211 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.634219 kubelet[3994]: W1216 13:05:25.634217 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.634330 kubelet[3994]: E1216 13:05:25.634226 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.634354 kubelet[3994]: E1216 13:05:25.634337 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.634354 kubelet[3994]: W1216 13:05:25.634342 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.634354 kubelet[3994]: E1216 13:05:25.634349 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.635059 kubelet[3994]: E1216 13:05:25.634446 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.635059 kubelet[3994]: W1216 13:05:25.634453 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.635059 kubelet[3994]: E1216 13:05:25.634460 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.635059 kubelet[3994]: I1216 13:05:25.634476 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da16849a-2afd-49e7-91d5-6aafd4f3fe06-socket-dir\") pod \"csi-node-driver-sc2zv\" (UID: \"da16849a-2afd-49e7-91d5-6aafd4f3fe06\") " pod="calico-system/csi-node-driver-sc2zv" Dec 16 13:05:25.635059 kubelet[3994]: E1216 13:05:25.634590 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.635059 kubelet[3994]: W1216 13:05:25.634597 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.635059 kubelet[3994]: E1216 13:05:25.634604 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.635059 kubelet[3994]: I1216 13:05:25.634617 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jvp\" (UniqueName: \"kubernetes.io/projected/da16849a-2afd-49e7-91d5-6aafd4f3fe06-kube-api-access-d5jvp\") pod \"csi-node-driver-sc2zv\" (UID: \"da16849a-2afd-49e7-91d5-6aafd4f3fe06\") " pod="calico-system/csi-node-driver-sc2zv" Dec 16 13:05:25.638551 kubelet[3994]: E1216 13:05:25.638350 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.638551 kubelet[3994]: W1216 13:05:25.638367 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.638551 kubelet[3994]: E1216 13:05:25.638381 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.638740 containerd[2510]: time="2025-12-16T13:05:25.638705597Z" level=info msg="connecting to shim 6fb4a0593d90f1936f2dd41f95e580f9712b25ddf978707418c1e20bbb87d14a" address="unix:///run/containerd/s/a3a939942106eb35d3d304979ef064dabcb175e9f62ff1daa7feb0201e87ce58" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:25.639228 kubelet[3994]: E1216 13:05:25.639033 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.639228 kubelet[3994]: W1216 13:05:25.639048 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.639228 kubelet[3994]: E1216 13:05:25.639061 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.640204 kubelet[3994]: E1216 13:05:25.640028 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.640204 kubelet[3994]: W1216 13:05:25.640051 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.640204 kubelet[3994]: E1216 13:05:25.640065 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.640204 kubelet[3994]: I1216 13:05:25.640100 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da16849a-2afd-49e7-91d5-6aafd4f3fe06-kubelet-dir\") pod \"csi-node-driver-sc2zv\" (UID: \"da16849a-2afd-49e7-91d5-6aafd4f3fe06\") " pod="calico-system/csi-node-driver-sc2zv" Dec 16 13:05:25.641917 kubelet[3994]: E1216 13:05:25.641894 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.641917 kubelet[3994]: W1216 13:05:25.641912 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.642111 kubelet[3994]: E1216 13:05:25.641925 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.642111 kubelet[3994]: E1216 13:05:25.642061 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.642111 kubelet[3994]: W1216 13:05:25.642067 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.642111 kubelet[3994]: E1216 13:05:25.642076 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.642209 kubelet[3994]: E1216 13:05:25.642183 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.642209 kubelet[3994]: W1216 13:05:25.642188 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.642209 kubelet[3994]: E1216 13:05:25.642195 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.648932 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 16 13:05:25.649011 kernel: audit: type=1325 audit(1765890325.641:547): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:25.641000 audit[4472]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:25.649093 kubelet[3994]: E1216 13:05:25.643387 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.649093 kubelet[3994]: W1216 13:05:25.643398 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.649093 kubelet[3994]: E1216 13:05:25.643411 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.649093 kubelet[3994]: E1216 13:05:25.644463 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.649093 kubelet[3994]: W1216 13:05:25.644529 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.649093 kubelet[3994]: E1216 13:05:25.644542 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.649093 kubelet[3994]: E1216 13:05:25.645779 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.649093 kubelet[3994]: W1216 13:05:25.645789 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.649093 kubelet[3994]: E1216 13:05:25.645800 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.641000 audit[4472]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc81e39200 a2=0 a3=7ffc81e391ec items=0 ppid=4104 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.656861 kernel: audit: type=1300 audit(1765890325.641:547): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc81e39200 a2=0 a3=7ffc81e391ec items=0 ppid=4104 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:25.663892 kernel: audit: type=1327 audit(1765890325.641:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:25.668041 kernel: audit: type=1325 audit(1765890325.663:548): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:25.663000 audit[4472]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:25.675879 kernel: audit: type=1300 audit(1765890325.663:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc81e39200 a2=0 a3=0 items=0 ppid=4104 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.663000 audit[4472]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc81e39200 
a2=0 a3=0 items=0 ppid=4104 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.680291 containerd[2510]: time="2025-12-16T13:05:25.680064025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m5msp,Uid:087e5f70-20b4-4130-b83f-3e42982fb2b2,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:25.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:25.687880 kernel: audit: type=1327 audit(1765890325.663:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:25.689073 systemd[1]: Started cri-containerd-6fb4a0593d90f1936f2dd41f95e580f9712b25ddf978707418c1e20bbb87d14a.scope - libcontainer container 6fb4a0593d90f1936f2dd41f95e580f9712b25ddf978707418c1e20bbb87d14a. 
Dec 16 13:05:25.711870 kernel: audit: type=1334 audit(1765890325.708:549): prog-id=175 op=LOAD Dec 16 13:05:25.708000 audit: BPF prog-id=175 op=LOAD Dec 16 13:05:25.713871 kernel: audit: type=1334 audit(1765890325.710:550): prog-id=176 op=LOAD Dec 16 13:05:25.721906 kernel: audit: type=1300 audit(1765890325.710:550): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: BPF prog-id=176 op=LOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.729948 kernel: audit: type=1327 audit(1765890325.710:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.710000 audit: BPF prog-id=176 op=UNLOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.710000 audit: BPF prog-id=177 op=LOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.710000 audit: BPF prog-id=178 op=LOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.710000 audit: BPF prog-id=178 op=UNLOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.710000 audit: BPF prog-id=177 op=UNLOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.710000 audit: BPF prog-id=179 op=LOAD Dec 16 13:05:25.710000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4465 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666623461303539336439306631393336663264643431663935653538 Dec 16 13:05:25.741221 kubelet[3994]: E1216 13:05:25.741203 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.741221 kubelet[3994]: W1216 13:05:25.741221 3994 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.741346 kubelet[3994]: E1216 13:05:25.741238 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.742900 kubelet[3994]: E1216 13:05:25.742043 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.742900 kubelet[3994]: W1216 13:05:25.742054 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.742900 kubelet[3994]: E1216 13:05:25.742068 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.742900 kubelet[3994]: E1216 13:05:25.742241 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.742900 kubelet[3994]: W1216 13:05:25.742248 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.742900 kubelet[3994]: E1216 13:05:25.742257 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.742900 kubelet[3994]: E1216 13:05:25.742471 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.742900 kubelet[3994]: W1216 13:05:25.742477 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.742900 kubelet[3994]: E1216 13:05:25.742485 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.743259 kubelet[3994]: E1216 13:05:25.743248 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.743259 kubelet[3994]: W1216 13:05:25.743259 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.743322 kubelet[3994]: E1216 13:05:25.743271 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.743891 kubelet[3994]: E1216 13:05:25.743878 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.743942 kubelet[3994]: W1216 13:05:25.743892 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.743942 kubelet[3994]: E1216 13:05:25.743903 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.744210 kubelet[3994]: E1216 13:05:25.744199 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.744254 kubelet[3994]: W1216 13:05:25.744210 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.744254 kubelet[3994]: E1216 13:05:25.744220 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.746077 kubelet[3994]: E1216 13:05:25.746060 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.746077 kubelet[3994]: W1216 13:05:25.746076 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.746183 kubelet[3994]: E1216 13:05:25.746090 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.746243 kubelet[3994]: E1216 13:05:25.746235 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.746272 kubelet[3994]: W1216 13:05:25.746244 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.746272 kubelet[3994]: E1216 13:05:25.746252 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.746385 kubelet[3994]: E1216 13:05:25.746378 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.746412 kubelet[3994]: W1216 13:05:25.746386 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.746412 kubelet[3994]: E1216 13:05:25.746393 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.746513 kubelet[3994]: E1216 13:05:25.746505 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.746537 kubelet[3994]: W1216 13:05:25.746513 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.746537 kubelet[3994]: E1216 13:05:25.746520 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.749081 kubelet[3994]: E1216 13:05:25.749062 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.749081 kubelet[3994]: W1216 13:05:25.749081 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.749186 kubelet[3994]: E1216 13:05:25.749093 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.749232 kubelet[3994]: E1216 13:05:25.749223 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.749263 kubelet[3994]: W1216 13:05:25.749232 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.749263 kubelet[3994]: E1216 13:05:25.749240 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.749355 kubelet[3994]: E1216 13:05:25.749347 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.749379 kubelet[3994]: W1216 13:05:25.749355 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.749379 kubelet[3994]: E1216 13:05:25.749363 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.749474 kubelet[3994]: E1216 13:05:25.749466 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.749500 kubelet[3994]: W1216 13:05:25.749474 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.749500 kubelet[3994]: E1216 13:05:25.749481 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.749896 kubelet[3994]: E1216 13:05:25.749886 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.749896 kubelet[3994]: W1216 13:05:25.749896 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.749956 kubelet[3994]: E1216 13:05:25.749905 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.750979 kubelet[3994]: E1216 13:05:25.750961 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.750979 kubelet[3994]: W1216 13:05:25.750977 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.751082 kubelet[3994]: E1216 13:05:25.750990 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.751163 kubelet[3994]: E1216 13:05:25.751154 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.751202 kubelet[3994]: W1216 13:05:25.751164 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.751202 kubelet[3994]: E1216 13:05:25.751172 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.751572 kubelet[3994]: E1216 13:05:25.751553 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.751618 kubelet[3994]: W1216 13:05:25.751577 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.751618 kubelet[3994]: E1216 13:05:25.751589 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.752951 kubelet[3994]: E1216 13:05:25.752931 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.752951 kubelet[3994]: W1216 13:05:25.752950 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.753054 kubelet[3994]: E1216 13:05:25.752962 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.754150 kubelet[3994]: E1216 13:05:25.754134 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.754217 kubelet[3994]: W1216 13:05:25.754169 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.754217 kubelet[3994]: E1216 13:05:25.754180 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.755067 kubelet[3994]: E1216 13:05:25.755049 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.755121 kubelet[3994]: W1216 13:05:25.755069 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.755121 kubelet[3994]: E1216 13:05:25.755082 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.756308 kubelet[3994]: E1216 13:05:25.755259 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.756308 kubelet[3994]: W1216 13:05:25.755266 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.756308 kubelet[3994]: E1216 13:05:25.755274 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.756308 kubelet[3994]: E1216 13:05:25.755493 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.756308 kubelet[3994]: W1216 13:05:25.755499 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.756308 kubelet[3994]: E1216 13:05:25.755506 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.757241 kubelet[3994]: E1216 13:05:25.757221 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.757241 kubelet[3994]: W1216 13:05:25.757240 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.757329 kubelet[3994]: E1216 13:05:25.757253 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:25.763245 containerd[2510]: time="2025-12-16T13:05:25.762675835Z" level=info msg="connecting to shim 17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639" address="unix:///run/containerd/s/c0d2a381e251d388021580d6c7672f12e199c3d85677d88d6ff4884aaad2e308" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:25.782017 kubelet[3994]: E1216 13:05:25.781997 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:25.782017 kubelet[3994]: W1216 13:05:25.782016 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:25.782157 kubelet[3994]: E1216 13:05:25.782028 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:25.813028 systemd[1]: Started cri-containerd-17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639.scope - libcontainer container 17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639. 
Dec 16 13:05:25.878000 audit: BPF prog-id=180 op=LOAD Dec 16 13:05:25.879000 audit: BPF prog-id=181 op=LOAD Dec 16 13:05:25.879000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.880000 audit: BPF prog-id=181 op=UNLOAD Dec 16 13:05:25.880000 audit[4555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.881000 audit: BPF prog-id=182 op=LOAD Dec 16 13:05:25.881000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.881000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.881000 audit: BPF prog-id=183 op=LOAD Dec 16 13:05:25.881000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.881000 audit: BPF prog-id=183 op=UNLOAD Dec 16 13:05:25.881000 audit[4555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.881000 audit: BPF prog-id=182 op=UNLOAD Dec 16 13:05:25.881000 audit[4555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.881000 audit: BPF prog-id=184 op=LOAD Dec 16 13:05:25.881000 audit[4555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4544 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:25.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137623635336465656337663338336435666631653737656161343166 Dec 16 13:05:25.901225 containerd[2510]: time="2025-12-16T13:05:25.900838448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-767c68cc8d-wk4mb,Uid:c800952a-854e-473e-af09-9bdc6c776ae1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fb4a0593d90f1936f2dd41f95e580f9712b25ddf978707418c1e20bbb87d14a\"" Dec 16 13:05:25.905411 containerd[2510]: time="2025-12-16T13:05:25.905235585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:05:25.911956 containerd[2510]: time="2025-12-16T13:05:25.911895889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m5msp,Uid:087e5f70-20b4-4130-b83f-3e42982fb2b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\"" Dec 16 13:05:26.822441 kubelet[3994]: E1216 13:05:26.822392 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:27.517996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203678831.mount: Deactivated successfully. Dec 16 13:05:28.659234 containerd[2510]: time="2025-12-16T13:05:28.659189696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:28.663688 containerd[2510]: time="2025-12-16T13:05:28.663652399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:05:28.667143 containerd[2510]: time="2025-12-16T13:05:28.667090266Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:28.673266 containerd[2510]: time="2025-12-16T13:05:28.673112506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:28.673730 containerd[2510]: time="2025-12-16T13:05:28.673702561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.768287129s" Dec 16 13:05:28.673771 containerd[2510]: time="2025-12-16T13:05:28.673730164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:05:28.676009 containerd[2510]: 
time="2025-12-16T13:05:28.675969873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:05:28.713958 containerd[2510]: time="2025-12-16T13:05:28.713928475Z" level=info msg="CreateContainer within sandbox \"6fb4a0593d90f1936f2dd41f95e580f9712b25ddf978707418c1e20bbb87d14a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:05:28.745997 containerd[2510]: time="2025-12-16T13:05:28.743137063Z" level=info msg="Container c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:28.773551 containerd[2510]: time="2025-12-16T13:05:28.773514821Z" level=info msg="CreateContainer within sandbox \"6fb4a0593d90f1936f2dd41f95e580f9712b25ddf978707418c1e20bbb87d14a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402\"" Dec 16 13:05:28.775055 containerd[2510]: time="2025-12-16T13:05:28.774905171Z" level=info msg="StartContainer for \"c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402\"" Dec 16 13:05:28.777893 containerd[2510]: time="2025-12-16T13:05:28.777856584Z" level=info msg="connecting to shim c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402" address="unix:///run/containerd/s/a3a939942106eb35d3d304979ef064dabcb175e9f62ff1daa7feb0201e87ce58" protocol=ttrpc version=3 Dec 16 13:05:28.800048 systemd[1]: Started cri-containerd-c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402.scope - libcontainer container c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402. 
Dec 16 13:05:28.809000 audit: BPF prog-id=185 op=LOAD Dec 16 13:05:28.810000 audit: BPF prog-id=186 op=LOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.810000 audit: BPF prog-id=186 op=UNLOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.810000 audit: BPF prog-id=187 op=LOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.810000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.810000 audit: BPF prog-id=188 op=LOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.810000 audit: BPF prog-id=188 op=UNLOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.810000 audit: BPF prog-id=187 op=UNLOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:28.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.810000 audit: BPF prog-id=189 op=LOAD Dec 16 13:05:28.810000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4465 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:28.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336396163343335663063323563313233366466393262306366373139 Dec 16 13:05:28.822150 kubelet[3994]: E1216 13:05:28.822111 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:28.852863 containerd[2510]: time="2025-12-16T13:05:28.852817402Z" level=info msg="StartContainer for \"c69ac435f0c25c1236df92b0cf719f748776c87fff491a098ef1ab4ace062402\" returns successfully" Dec 16 13:05:28.935866 kubelet[3994]: E1216 13:05:28.934665 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.935866 kubelet[3994]: W1216 13:05:28.934690 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Dec 16 13:05:28.936166 kubelet[3994]: E1216 13:05:28.936036 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.936291 kubelet[3994]: E1216 13:05:28.936281 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.936389 kubelet[3994]: W1216 13:05:28.936341 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.936389 kubelet[3994]: E1216 13:05:28.936357 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.936628 kubelet[3994]: E1216 13:05:28.936606 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.936718 kubelet[3994]: W1216 13:05:28.936618 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.936718 kubelet[3994]: E1216 13:05:28.936683 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.937062 kubelet[3994]: E1216 13:05:28.936991 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.937062 kubelet[3994]: W1216 13:05:28.937009 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.937062 kubelet[3994]: E1216 13:05:28.937020 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.937330 kubelet[3994]: E1216 13:05:28.937304 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.937330 kubelet[3994]: W1216 13:05:28.937313 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.937428 kubelet[3994]: E1216 13:05:28.937397 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.937588 kubelet[3994]: E1216 13:05:28.937581 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.937678 kubelet[3994]: W1216 13:05:28.937635 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.937678 kubelet[3994]: E1216 13:05:28.937645 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.938895 kubelet[3994]: E1216 13:05:28.938015 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.939079 kubelet[3994]: W1216 13:05:28.938981 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.939079 kubelet[3994]: E1216 13:05:28.939002 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.939359 kubelet[3994]: E1216 13:05:28.939316 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.939359 kubelet[3994]: W1216 13:05:28.939327 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.939359 kubelet[3994]: E1216 13:05:28.939338 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.939704 kubelet[3994]: E1216 13:05:28.939633 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.939860 kubelet[3994]: W1216 13:05:28.939761 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.939860 kubelet[3994]: E1216 13:05:28.939784 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.941111 kubelet[3994]: E1216 13:05:28.941094 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.941263 kubelet[3994]: W1216 13:05:28.941209 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.941263 kubelet[3994]: E1216 13:05:28.941225 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.941500 kubelet[3994]: E1216 13:05:28.941470 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.941500 kubelet[3994]: W1216 13:05:28.941482 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.941654 kubelet[3994]: E1216 13:05:28.941579 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.941786 kubelet[3994]: E1216 13:05:28.941778 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.941834 kubelet[3994]: W1216 13:05:28.941828 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.941893 kubelet[3994]: E1216 13:05:28.941885 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.942627 kubelet[3994]: E1216 13:05:28.942612 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.942738 kubelet[3994]: W1216 13:05:28.942690 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.942738 kubelet[3994]: E1216 13:05:28.942703 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.942983 kubelet[3994]: E1216 13:05:28.942943 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.942983 kubelet[3994]: W1216 13:05:28.942950 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.942983 kubelet[3994]: E1216 13:05:28.942958 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.943190 kubelet[3994]: E1216 13:05:28.943138 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.943190 kubelet[3994]: W1216 13:05:28.943143 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.943190 kubelet[3994]: E1216 13:05:28.943150 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.964199 kubelet[3994]: E1216 13:05:28.964184 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.964446 kubelet[3994]: W1216 13:05:28.964364 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.964446 kubelet[3994]: E1216 13:05:28.964381 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.965856 kubelet[3994]: E1216 13:05:28.965810 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.966389 kubelet[3994]: W1216 13:05:28.966368 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.966579 kubelet[3994]: E1216 13:05:28.966553 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.967148 kubelet[3994]: E1216 13:05:28.967053 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.967148 kubelet[3994]: W1216 13:05:28.967066 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.967148 kubelet[3994]: E1216 13:05:28.967078 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.967821 kubelet[3994]: E1216 13:05:28.967672 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.967821 kubelet[3994]: W1216 13:05:28.967684 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.967821 kubelet[3994]: E1216 13:05:28.967695 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.968504 kubelet[3994]: E1216 13:05:28.968401 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.968636 kubelet[3994]: W1216 13:05:28.968569 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.968636 kubelet[3994]: E1216 13:05:28.968585 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.968996 kubelet[3994]: E1216 13:05:28.968984 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.969087 kubelet[3994]: W1216 13:05:28.969078 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.969186 kubelet[3994]: E1216 13:05:28.969134 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.969977 kubelet[3994]: E1216 13:05:28.969808 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.969977 kubelet[3994]: W1216 13:05:28.969821 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.969977 kubelet[3994]: E1216 13:05:28.969833 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.972334 kubelet[3994]: E1216 13:05:28.972160 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.972608 kubelet[3994]: W1216 13:05:28.972418 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.972608 kubelet[3994]: E1216 13:05:28.972434 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.973654 kubelet[3994]: E1216 13:05:28.973452 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.973654 kubelet[3994]: W1216 13:05:28.973466 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.973654 kubelet[3994]: E1216 13:05:28.973480 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.975613 kubelet[3994]: E1216 13:05:28.975386 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.975613 kubelet[3994]: W1216 13:05:28.975399 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.975613 kubelet[3994]: E1216 13:05:28.975413 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.977656 kubelet[3994]: E1216 13:05:28.977287 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.977656 kubelet[3994]: W1216 13:05:28.977303 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.977656 kubelet[3994]: E1216 13:05:28.977368 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.977656 kubelet[3994]: E1216 13:05:28.977570 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.977656 kubelet[3994]: W1216 13:05:28.977577 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.977656 kubelet[3994]: E1216 13:05:28.977586 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.978283 kubelet[3994]: E1216 13:05:28.978245 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.978283 kubelet[3994]: W1216 13:05:28.978259 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.978283 kubelet[3994]: E1216 13:05:28.978271 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.979362 kubelet[3994]: E1216 13:05:28.979294 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.979362 kubelet[3994]: W1216 13:05:28.979309 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.979362 kubelet[3994]: E1216 13:05:28.979321 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.981528 kubelet[3994]: E1216 13:05:28.981231 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.981528 kubelet[3994]: W1216 13:05:28.981249 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.981528 kubelet[3994]: E1216 13:05:28.981262 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.981828 kubelet[3994]: E1216 13:05:28.981794 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.983799 kubelet[3994]: W1216 13:05:28.982029 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.983799 kubelet[3994]: E1216 13:05:28.982042 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:28.983799 kubelet[3994]: E1216 13:05:28.982920 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.983799 kubelet[3994]: W1216 13:05:28.983198 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.983799 kubelet[3994]: E1216 13:05:28.983211 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:28.984451 kubelet[3994]: E1216 13:05:28.984430 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:28.985646 kubelet[3994]: W1216 13:05:28.984502 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:28.985646 kubelet[3994]: E1216 13:05:28.984517 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.858131 containerd[2510]: time="2025-12-16T13:05:29.858089868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:29.862870 containerd[2510]: time="2025-12-16T13:05:29.862812490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:29.867101 containerd[2510]: time="2025-12-16T13:05:29.867053769Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:29.871901 containerd[2510]: time="2025-12-16T13:05:29.871867818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:29.872585 containerd[2510]: time="2025-12-16T13:05:29.872561586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.196558426s" Dec 16 13:05:29.872647 containerd[2510]: time="2025-12-16T13:05:29.872588395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:05:29.879412 containerd[2510]: time="2025-12-16T13:05:29.879385901Z" level=info msg="CreateContainer within sandbox \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:05:29.886326 kubelet[3994]: I1216 13:05:29.886303 3994 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:05:29.902810 containerd[2510]: time="2025-12-16T13:05:29.901748914Z" level=info msg="Container 8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:29.922346 containerd[2510]: time="2025-12-16T13:05:29.922301954Z" level=info msg="CreateContainer within sandbox \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd\"" Dec 16 13:05:29.922936 containerd[2510]: time="2025-12-16T13:05:29.922771877Z" level=info msg="StartContainer for \"8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd\"" Dec 16 13:05:29.924686 containerd[2510]: time="2025-12-16T13:05:29.924641908Z" level=info msg="connecting to shim 8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd" address="unix:///run/containerd/s/c0d2a381e251d388021580d6c7672f12e199c3d85677d88d6ff4884aaad2e308" protocol=ttrpc version=3 Dec 16 13:05:29.945046 systemd[1]: Started cri-containerd-8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd.scope - libcontainer container 8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd. 
Dec 16 13:05:29.948726 kubelet[3994]: E1216 13:05:29.948660 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.948905 kubelet[3994]: W1216 13:05:29.948678 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.948905 kubelet[3994]: E1216 13:05:29.948829 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.949115 kubelet[3994]: E1216 13:05:29.949085 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.949115 kubelet[3994]: W1216 13:05:29.949104 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.949177 kubelet[3994]: E1216 13:05:29.949122 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.949276 kubelet[3994]: E1216 13:05:29.949251 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.949276 kubelet[3994]: W1216 13:05:29.949264 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.949276 kubelet[3994]: E1216 13:05:29.949271 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.949413 kubelet[3994]: E1216 13:05:29.949393 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.949413 kubelet[3994]: W1216 13:05:29.949411 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.949477 kubelet[3994]: E1216 13:05:29.949418 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.949881 kubelet[3994]: E1216 13:05:29.949749 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.949938 kubelet[3994]: W1216 13:05:29.949885 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.949938 kubelet[3994]: E1216 13:05:29.949896 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.950196 kubelet[3994]: E1216 13:05:29.950183 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.950196 kubelet[3994]: W1216 13:05:29.950193 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.950275 kubelet[3994]: E1216 13:05:29.950203 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.951046 kubelet[3994]: E1216 13:05:29.951015 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.951046 kubelet[3994]: W1216 13:05:29.951042 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.951160 kubelet[3994]: E1216 13:05:29.951056 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.951192 kubelet[3994]: E1216 13:05:29.951186 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.951223 kubelet[3994]: W1216 13:05:29.951192 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.951223 kubelet[3994]: E1216 13:05:29.951199 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.951406 kubelet[3994]: E1216 13:05:29.951393 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.951406 kubelet[3994]: W1216 13:05:29.951403 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.951706 kubelet[3994]: E1216 13:05:29.951411 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.951874 kubelet[3994]: E1216 13:05:29.951862 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.951874 kubelet[3994]: W1216 13:05:29.951873 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.951952 kubelet[3994]: E1216 13:05:29.951882 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.953097 kubelet[3994]: E1216 13:05:29.953080 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.953097 kubelet[3994]: W1216 13:05:29.953092 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.953195 kubelet[3994]: E1216 13:05:29.953103 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.953258 kubelet[3994]: E1216 13:05:29.953246 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.953258 kubelet[3994]: W1216 13:05:29.953255 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.953334 kubelet[3994]: E1216 13:05:29.953263 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.953436 kubelet[3994]: E1216 13:05:29.953425 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.953436 kubelet[3994]: W1216 13:05:29.953433 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.953501 kubelet[3994]: E1216 13:05:29.953440 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.953571 kubelet[3994]: E1216 13:05:29.953560 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.953571 kubelet[3994]: W1216 13:05:29.953568 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.953643 kubelet[3994]: E1216 13:05:29.953574 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.953810 kubelet[3994]: E1216 13:05:29.953798 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.953810 kubelet[3994]: W1216 13:05:29.953807 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.953899 kubelet[3994]: E1216 13:05:29.953815 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.976202 kubelet[3994]: E1216 13:05:29.976185 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.976202 kubelet[3994]: W1216 13:05:29.976220 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.976202 kubelet[3994]: E1216 13:05:29.976234 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.976513 kubelet[3994]: E1216 13:05:29.976505 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.976632 kubelet[3994]: W1216 13:05:29.976557 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.976632 kubelet[3994]: E1216 13:05:29.976570 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.976890 kubelet[3994]: E1216 13:05:29.976872 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.976890 kubelet[3994]: W1216 13:05:29.976885 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.976986 kubelet[3994]: E1216 13:05:29.976895 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.977059 kubelet[3994]: E1216 13:05:29.977049 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.977088 kubelet[3994]: W1216 13:05:29.977079 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.977122 kubelet[3994]: E1216 13:05:29.977087 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.977275 kubelet[3994]: E1216 13:05:29.977261 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.977447 kubelet[3994]: W1216 13:05:29.977327 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.977447 kubelet[3994]: E1216 13:05:29.977353 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.977621 kubelet[3994]: E1216 13:05:29.977614 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.977667 kubelet[3994]: W1216 13:05:29.977661 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.977861 kubelet[3994]: E1216 13:05:29.977699 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.977952 kubelet[3994]: E1216 13:05:29.977933 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.977984 kubelet[3994]: W1216 13:05:29.977954 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.977984 kubelet[3994]: E1216 13:05:29.977964 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.978241 kubelet[3994]: E1216 13:05:29.978222 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.978281 kubelet[3994]: W1216 13:05:29.978259 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.978281 kubelet[3994]: E1216 13:05:29.978269 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.978475 kubelet[3994]: E1216 13:05:29.978462 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.978475 kubelet[3994]: W1216 13:05:29.978473 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.978537 kubelet[3994]: E1216 13:05:29.978481 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.978655 kubelet[3994]: E1216 13:05:29.978631 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.978655 kubelet[3994]: W1216 13:05:29.978653 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.978711 kubelet[3994]: E1216 13:05:29.978661 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.978816 kubelet[3994]: E1216 13:05:29.978803 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.978816 kubelet[3994]: W1216 13:05:29.978814 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.978905 kubelet[3994]: E1216 13:05:29.978821 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.980397 kubelet[3994]: E1216 13:05:29.980351 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.980397 kubelet[3994]: W1216 13:05:29.980363 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.980397 kubelet[3994]: E1216 13:05:29.980377 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.980926 kubelet[3994]: E1216 13:05:29.980890 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.980926 kubelet[3994]: W1216 13:05:29.980902 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.980926 kubelet[3994]: E1216 13:05:29.980914 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.981828 kubelet[3994]: E1216 13:05:29.981815 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.982246 kubelet[3994]: W1216 13:05:29.981879 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.982246 kubelet[3994]: E1216 13:05:29.981893 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.982373 kubelet[3994]: E1216 13:05:29.982365 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.982413 kubelet[3994]: W1216 13:05:29.982406 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.981000 audit: BPF prog-id=190 op=LOAD Dec 16 13:05:29.981000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4544 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:29.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363336353230363636353836356564363633306238656335386666 Dec 16 13:05:29.981000 audit: BPF prog-id=191 op=LOAD Dec 16 13:05:29.981000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4544 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:29.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363336353230363636353836356564363633306238656335386666 Dec 16 13:05:29.981000 audit: BPF prog-id=191 op=UNLOAD Dec 16 13:05:29.981000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=16 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:29.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363336353230363636353836356564363633306238656335386666 Dec 16 13:05:29.981000 audit: BPF prog-id=190 op=UNLOAD Dec 16 13:05:29.981000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:29.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363336353230363636353836356564363633306238656335386666 Dec 16 13:05:29.981000 audit: BPF prog-id=192 op=LOAD Dec 16 13:05:29.981000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4544 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:29.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363336353230363636353836356564363633306238656335386666 Dec 16 13:05:29.983501 kubelet[3994]: E1216 13:05:29.982719 3994 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.983985 kubelet[3994]: E1216 13:05:29.983736 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.983985 kubelet[3994]: W1216 13:05:29.983750 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.983985 kubelet[3994]: E1216 13:05:29.983762 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:29.984138 kubelet[3994]: E1216 13:05:29.984129 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.984252 kubelet[3994]: W1216 13:05:29.984242 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.984334 kubelet[3994]: E1216 13:05:29.984291 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:05:29.985007 kubelet[3994]: E1216 13:05:29.984900 3994 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:05:29.985007 kubelet[3994]: W1216 13:05:29.984911 3994 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:05:29.985007 kubelet[3994]: E1216 13:05:29.984936 3994 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:05:30.010746 containerd[2510]: time="2025-12-16T13:05:30.010685373Z" level=info msg="StartContainer for \"8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd\" returns successfully" Dec 16 13:05:30.016869 systemd[1]: cri-containerd-8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd.scope: Deactivated successfully. Dec 16 13:05:30.021163 containerd[2510]: time="2025-12-16T13:05:30.021062096Z" level=info msg="received container exit event container_id:\"8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd\" id:\"8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd\" pid:4690 exited_at:{seconds:1765890330 nanos:20466024}" Dec 16 13:05:30.021000 audit: BPF prog-id=192 op=UNLOAD Dec 16 13:05:30.050094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6365206665865ed6630b8ec58ff9dd8bea1d745a18624844d43cf931dd1cfd-rootfs.mount: Deactivated successfully. 
Dec 16 13:05:30.822352 kubelet[3994]: E1216 13:05:30.822288 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:30.915863 kubelet[3994]: I1216 13:05:30.915733 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-767c68cc8d-wk4mb" podStartSLOduration=3.144734331 podStartE2EDuration="5.915714927s" podCreationTimestamp="2025-12-16 13:05:25 +0000 UTC" firstStartedPulling="2025-12-16 13:05:25.903525896 +0000 UTC m=+20.192438178" lastFinishedPulling="2025-12-16 13:05:28.674506499 +0000 UTC m=+22.963418774" observedRunningTime="2025-12-16 13:05:28.901492971 +0000 UTC m=+23.190405252" watchObservedRunningTime="2025-12-16 13:05:30.915714927 +0000 UTC m=+25.204627226" Dec 16 13:05:32.822835 kubelet[3994]: E1216 13:05:32.822799 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:32.897115 containerd[2510]: time="2025-12-16T13:05:32.896359754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:05:34.822661 kubelet[3994]: E1216 13:05:34.822609 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:35.308976 containerd[2510]: time="2025-12-16T13:05:35.308932916Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:35.312233 containerd[2510]: time="2025-12-16T13:05:35.312134888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 13:05:35.315930 containerd[2510]: time="2025-12-16T13:05:35.315905074Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:35.319713 containerd[2510]: time="2025-12-16T13:05:35.319660888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:35.320397 containerd[2510]: time="2025-12-16T13:05:35.320073293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.4236736s" Dec 16 13:05:35.320397 containerd[2510]: time="2025-12-16T13:05:35.320103047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:05:35.327297 containerd[2510]: time="2025-12-16T13:05:35.327269676Z" level=info msg="CreateContainer within sandbox \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:05:35.354181 containerd[2510]: time="2025-12-16T13:05:35.353027827Z" level=info msg="Container 7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45: CDI devices from CRI 
Config.CDIDevices: []" Dec 16 13:05:35.357812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828975918.mount: Deactivated successfully. Dec 16 13:05:35.373833 containerd[2510]: time="2025-12-16T13:05:35.373805444Z" level=info msg="CreateContainer within sandbox \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45\"" Dec 16 13:05:35.374985 containerd[2510]: time="2025-12-16T13:05:35.374207769Z" level=info msg="StartContainer for \"7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45\"" Dec 16 13:05:35.376053 containerd[2510]: time="2025-12-16T13:05:35.376026031Z" level=info msg="connecting to shim 7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45" address="unix:///run/containerd/s/c0d2a381e251d388021580d6c7672f12e199c3d85677d88d6ff4884aaad2e308" protocol=ttrpc version=3 Dec 16 13:05:35.399044 systemd[1]: Started cri-containerd-7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45.scope - libcontainer container 7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45. 
Dec 16 13:05:35.435000 audit: BPF prog-id=193 op=LOAD Dec 16 13:05:35.437218 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 16 13:05:35.437287 kernel: audit: type=1334 audit(1765890335.435:579): prog-id=193 op=LOAD Dec 16 13:05:35.442102 kernel: audit: type=1300 audit(1765890335.435:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.435000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.448258 kernel: audit: type=1327 audit(1765890335.435:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.437000 audit: BPF prog-id=194 op=LOAD Dec 16 13:05:35.437000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.453797 kernel: audit: type=1334 
audit(1765890335.437:580): prog-id=194 op=LOAD Dec 16 13:05:35.453914 kernel: audit: type=1300 audit(1765890335.437:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.460808 kernel: audit: type=1327 audit(1765890335.437:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.437000 audit: BPF prog-id=194 op=UNLOAD Dec 16 13:05:35.467623 kernel: audit: type=1334 audit(1765890335.437:581): prog-id=194 op=UNLOAD Dec 16 13:05:35.467672 kernel: audit: type=1300 audit(1765890335.437:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.437000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.472635 kernel: audit: type=1327 audit(1765890335.437:581): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.437000 audit: BPF prog-id=193 op=UNLOAD Dec 16 13:05:35.437000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.437000 audit: BPF prog-id=195 op=LOAD Dec 16 13:05:35.437000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4544 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:35.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764323065376133336163663635623836666535363931383337323762 Dec 16 13:05:35.474864 kernel: audit: type=1334 
audit(1765890335.437:582): prog-id=193 op=UNLOAD Dec 16 13:05:35.491162 containerd[2510]: time="2025-12-16T13:05:35.491123180Z" level=info msg="StartContainer for \"7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45\" returns successfully" Dec 16 13:05:36.699550 systemd[1]: cri-containerd-7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45.scope: Deactivated successfully. Dec 16 13:05:36.699903 systemd[1]: cri-containerd-7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45.scope: Consumed 411ms CPU time, 193.9M memory peak, 171.3M written to disk. Dec 16 13:05:36.702799 containerd[2510]: time="2025-12-16T13:05:36.702756984Z" level=info msg="received container exit event container_id:\"7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45\" id:\"7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45\" pid:4783 exited_at:{seconds:1765890336 nanos:702511864}" Dec 16 13:05:36.703000 audit: BPF prog-id=195 op=UNLOAD Dec 16 13:05:36.709176 kubelet[3994]: I1216 13:05:36.708076 3994 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:05:36.731131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d20e7a33acf65b86fe569183727bb13425c658ee8784f3315424e78e80b9a45-rootfs.mount: Deactivated successfully. Dec 16 13:05:36.908951 systemd[1]: Created slice kubepods-burstable-pod2a3a8948_f27a_4f11_916b_8b42b855e619.slice - libcontainer container kubepods-burstable-pod2a3a8948_f27a_4f11_916b_8b42b855e619.slice. Dec 16 13:05:36.913939 systemd[1]: Created slice kubepods-besteffort-podda16849a_2afd_49e7_91d5_6aafd4f3fe06.slice - libcontainer container kubepods-besteffort-podda16849a_2afd_49e7_91d5_6aafd4f3fe06.slice. 
Dec 16 13:05:36.919981 kubelet[3994]: I1216 13:05:36.919722 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2tx\" (UniqueName: \"kubernetes.io/projected/2a3a8948-f27a-4f11-916b-8b42b855e619-kube-api-access-tb2tx\") pod \"coredns-674b8bbfcf-bvkgt\" (UID: \"2a3a8948-f27a-4f11-916b-8b42b855e619\") " pod="kube-system/coredns-674b8bbfcf-bvkgt" Dec 16 13:05:36.919981 kubelet[3994]: I1216 13:05:36.919928 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3a8948-f27a-4f11-916b-8b42b855e619-config-volume\") pod \"coredns-674b8bbfcf-bvkgt\" (UID: \"2a3a8948-f27a-4f11-916b-8b42b855e619\") " pod="kube-system/coredns-674b8bbfcf-bvkgt" Dec 16 13:05:36.920429 containerd[2510]: time="2025-12-16T13:05:36.920398682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc2zv,Uid:da16849a-2afd-49e7-91d5-6aafd4f3fe06,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:37.058959 systemd[1]: Created slice kubepods-besteffort-poddd14feb0_ccbc_4867_9fa9_0c2099e4adc4.slice - libcontainer container kubepods-besteffort-poddd14feb0_ccbc_4867_9fa9_0c2099e4adc4.slice. 
Dec 16 13:05:37.199267 kubelet[3994]: I1216 13:05:37.121282 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd14feb0-ccbc-4867-9fa9-0c2099e4adc4-calico-apiserver-certs\") pod \"calico-apiserver-ddb857f6f-r577f\" (UID: \"dd14feb0-ccbc-4867-9fa9-0c2099e4adc4\") " pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" Dec 16 13:05:37.199267 kubelet[3994]: I1216 13:05:37.121315 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bv8k\" (UniqueName: \"kubernetes.io/projected/dd14feb0-ccbc-4867-9fa9-0c2099e4adc4-kube-api-access-8bv8k\") pod \"calico-apiserver-ddb857f6f-r577f\" (UID: \"dd14feb0-ccbc-4867-9fa9-0c2099e4adc4\") " pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" Dec 16 13:05:37.212230 containerd[2510]: time="2025-12-16T13:05:37.212180574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvkgt,Uid:2a3a8948-f27a-4f11-916b-8b42b855e619,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:37.457663 systemd[1]: Created slice kubepods-burstable-pod6ce8fb8e_6c4b_44f1_a6c2_6f2e0f1a3c02.slice - libcontainer container kubepods-burstable-pod6ce8fb8e_6c4b_44f1_a6c2_6f2e0f1a3c02.slice. 
Dec 16 13:05:37.500905 containerd[2510]: time="2025-12-16T13:05:37.500834156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-r577f,Uid:dd14feb0-ccbc-4867-9fa9-0c2099e4adc4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:37.524108 kubelet[3994]: I1216 13:05:37.524080 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02-config-volume\") pod \"coredns-674b8bbfcf-r88g8\" (UID: \"6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02\") " pod="kube-system/coredns-674b8bbfcf-r88g8" Dec 16 13:05:37.524210 kubelet[3994]: I1216 13:05:37.524118 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddctw\" (UniqueName: \"kubernetes.io/projected/6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02-kube-api-access-ddctw\") pod \"coredns-674b8bbfcf-r88g8\" (UID: \"6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02\") " pod="kube-system/coredns-674b8bbfcf-r88g8" Dec 16 13:05:37.673337 systemd[1]: Created slice kubepods-besteffort-pod38053ce8_04be_4cc4_9a23_d3c55f115a9e.slice - libcontainer container kubepods-besteffort-pod38053ce8_04be_4cc4_9a23_d3c55f115a9e.slice. Dec 16 13:05:37.686538 systemd[1]: Created slice kubepods-besteffort-pod17414321_ac90_43ed_affc_521db178bc15.slice - libcontainer container kubepods-besteffort-pod17414321_ac90_43ed_affc_521db178bc15.slice. Dec 16 13:05:37.703759 systemd[1]: Created slice kubepods-besteffort-podaa48803c_33bc_4da1_94e0_bc256a6f415a.slice - libcontainer container kubepods-besteffort-podaa48803c_33bc_4da1_94e0_bc256a6f415a.slice. Dec 16 13:05:37.716933 systemd[1]: Created slice kubepods-besteffort-pode585732d_c5ba_41d4_91da_20d86215882e.slice - libcontainer container kubepods-besteffort-pode585732d_c5ba_41d4_91da_20d86215882e.slice. 
Dec 16 13:05:37.727866 kubelet[3994]: I1216 13:05:37.726230 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2t9\" (UniqueName: \"kubernetes.io/projected/38053ce8-04be-4cc4-9a23-d3c55f115a9e-kube-api-access-bp2t9\") pod \"whisker-5795585bdf-wchgl\" (UID: \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\") " pod="calico-system/whisker-5795585bdf-wchgl" Dec 16 13:05:37.727866 kubelet[3994]: I1216 13:05:37.726273 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa48803c-33bc-4da1-94e0-bc256a6f415a-config\") pod \"goldmane-666569f655-2slmb\" (UID: \"aa48803c-33bc-4da1-94e0-bc256a6f415a\") " pod="calico-system/goldmane-666569f655-2slmb" Dec 16 13:05:37.727866 kubelet[3994]: I1216 13:05:37.726295 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aa48803c-33bc-4da1-94e0-bc256a6f415a-goldmane-key-pair\") pod \"goldmane-666569f655-2slmb\" (UID: \"aa48803c-33bc-4da1-94e0-bc256a6f415a\") " pod="calico-system/goldmane-666569f655-2slmb" Dec 16 13:05:37.727866 kubelet[3994]: I1216 13:05:37.726320 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17414321-ac90-43ed-affc-521db178bc15-tigera-ca-bundle\") pod \"calico-kube-controllers-76746db8cb-s8hqj\" (UID: \"17414321-ac90-43ed-affc-521db178bc15\") " pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" Dec 16 13:05:37.727866 kubelet[3994]: I1216 13:05:37.726344 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e585732d-c5ba-41d4-91da-20d86215882e-calico-apiserver-certs\") pod \"calico-apiserver-ddb857f6f-6zsvz\" (UID: 
\"e585732d-c5ba-41d4-91da-20d86215882e\") " pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" Dec 16 13:05:37.728267 kubelet[3994]: I1216 13:05:37.726368 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa48803c-33bc-4da1-94e0-bc256a6f415a-goldmane-ca-bundle\") pod \"goldmane-666569f655-2slmb\" (UID: \"aa48803c-33bc-4da1-94e0-bc256a6f415a\") " pod="calico-system/goldmane-666569f655-2slmb" Dec 16 13:05:37.728267 kubelet[3994]: I1216 13:05:37.726391 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxsb\" (UniqueName: \"kubernetes.io/projected/e585732d-c5ba-41d4-91da-20d86215882e-kube-api-access-6rxsb\") pod \"calico-apiserver-ddb857f6f-6zsvz\" (UID: \"e585732d-c5ba-41d4-91da-20d86215882e\") " pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" Dec 16 13:05:37.728267 kubelet[3994]: I1216 13:05:37.726415 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb74h\" (UniqueName: \"kubernetes.io/projected/aa48803c-33bc-4da1-94e0-bc256a6f415a-kube-api-access-hb74h\") pod \"goldmane-666569f655-2slmb\" (UID: \"aa48803c-33bc-4da1-94e0-bc256a6f415a\") " pod="calico-system/goldmane-666569f655-2slmb" Dec 16 13:05:37.728267 kubelet[3994]: I1216 13:05:37.726444 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-backend-key-pair\") pod \"whisker-5795585bdf-wchgl\" (UID: \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\") " pod="calico-system/whisker-5795585bdf-wchgl" Dec 16 13:05:37.728267 kubelet[3994]: I1216 13:05:37.726467 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6lh\" (UniqueName: 
\"kubernetes.io/projected/17414321-ac90-43ed-affc-521db178bc15-kube-api-access-fl6lh\") pod \"calico-kube-controllers-76746db8cb-s8hqj\" (UID: \"17414321-ac90-43ed-affc-521db178bc15\") " pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" Dec 16 13:05:37.728396 kubelet[3994]: I1216 13:05:37.726491 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-ca-bundle\") pod \"whisker-5795585bdf-wchgl\" (UID: \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\") " pod="calico-system/whisker-5795585bdf-wchgl" Dec 16 13:05:37.761872 containerd[2510]: time="2025-12-16T13:05:37.761688012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r88g8,Uid:6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:37.782766 containerd[2510]: time="2025-12-16T13:05:37.782178048Z" level=error msg="Failed to destroy network for sandbox \"9256714b9cd4664b99c0fc1421ff1ed141f95f578515bb3154329bdee7add9a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.784965 systemd[1]: run-netns-cni\x2d57d1d580\x2d2ca2\x2def95\x2d4fbb\x2de287e21600e0.mount: Deactivated successfully. 
Dec 16 13:05:37.792322 containerd[2510]: time="2025-12-16T13:05:37.792245973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc2zv,Uid:da16849a-2afd-49e7-91d5-6aafd4f3fe06,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9256714b9cd4664b99c0fc1421ff1ed141f95f578515bb3154329bdee7add9a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.792825 kubelet[3994]: E1216 13:05:37.792700 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9256714b9cd4664b99c0fc1421ff1ed141f95f578515bb3154329bdee7add9a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.792980 kubelet[3994]: E1216 13:05:37.792962 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9256714b9cd4664b99c0fc1421ff1ed141f95f578515bb3154329bdee7add9a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sc2zv" Dec 16 13:05:37.793178 kubelet[3994]: E1216 13:05:37.793101 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9256714b9cd4664b99c0fc1421ff1ed141f95f578515bb3154329bdee7add9a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sc2zv" 
Dec 16 13:05:37.793269 kubelet[3994]: E1216 13:05:37.793249 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9256714b9cd4664b99c0fc1421ff1ed141f95f578515bb3154329bdee7add9a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:37.843038 containerd[2510]: time="2025-12-16T13:05:37.842986284Z" level=error msg="Failed to destroy network for sandbox \"a4de7ba057d31cd5a6c2f8b03ec9b82ca86a9a991abd13162accc67127e52e34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.853574 containerd[2510]: time="2025-12-16T13:05:37.853533018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvkgt,Uid:2a3a8948-f27a-4f11-916b-8b42b855e619,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4de7ba057d31cd5a6c2f8b03ec9b82ca86a9a991abd13162accc67127e52e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.854860 kubelet[3994]: E1216 13:05:37.854599 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a4de7ba057d31cd5a6c2f8b03ec9b82ca86a9a991abd13162accc67127e52e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.856015 kubelet[3994]: E1216 13:05:37.855913 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4de7ba057d31cd5a6c2f8b03ec9b82ca86a9a991abd13162accc67127e52e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bvkgt" Dec 16 13:05:37.856540 kubelet[3994]: E1216 13:05:37.856313 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4de7ba057d31cd5a6c2f8b03ec9b82ca86a9a991abd13162accc67127e52e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bvkgt" Dec 16 13:05:37.856783 kubelet[3994]: E1216 13:05:37.856680 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bvkgt_kube-system(2a3a8948-f27a-4f11-916b-8b42b855e619)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bvkgt_kube-system(2a3a8948-f27a-4f11-916b-8b42b855e619)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4de7ba057d31cd5a6c2f8b03ec9b82ca86a9a991abd13162accc67127e52e34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bvkgt" 
podUID="2a3a8948-f27a-4f11-916b-8b42b855e619" Dec 16 13:05:37.869397 containerd[2510]: time="2025-12-16T13:05:37.869368495Z" level=error msg="Failed to destroy network for sandbox \"0942be7283648bced95f452300de5a5127852bd50367b2a31c19210c513529b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.880098 containerd[2510]: time="2025-12-16T13:05:37.880067390Z" level=error msg="Failed to destroy network for sandbox \"b8c77f7672e0399780d6c7fd995bff7246a1045ad61ca7f403dd7b31b39f9fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.881266 containerd[2510]: time="2025-12-16T13:05:37.881221034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-r577f,Uid:dd14feb0-ccbc-4867-9fa9-0c2099e4adc4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0942be7283648bced95f452300de5a5127852bd50367b2a31c19210c513529b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.881450 kubelet[3994]: E1216 13:05:37.881425 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0942be7283648bced95f452300de5a5127852bd50367b2a31c19210c513529b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.881495 kubelet[3994]: E1216 13:05:37.881473 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"0942be7283648bced95f452300de5a5127852bd50367b2a31c19210c513529b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" Dec 16 13:05:37.881523 kubelet[3994]: E1216 13:05:37.881493 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0942be7283648bced95f452300de5a5127852bd50367b2a31c19210c513529b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" Dec 16 13:05:37.881567 kubelet[3994]: E1216 13:05:37.881543 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddb857f6f-r577f_calico-apiserver(dd14feb0-ccbc-4867-9fa9-0c2099e4adc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddb857f6f-r577f_calico-apiserver(dd14feb0-ccbc-4867-9fa9-0c2099e4adc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0942be7283648bced95f452300de5a5127852bd50367b2a31c19210c513529b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:05:37.891259 containerd[2510]: time="2025-12-16T13:05:37.891217560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r88g8,Uid:6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"b8c77f7672e0399780d6c7fd995bff7246a1045ad61ca7f403dd7b31b39f9fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.891464 kubelet[3994]: E1216 13:05:37.891431 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c77f7672e0399780d6c7fd995bff7246a1045ad61ca7f403dd7b31b39f9fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:37.891521 kubelet[3994]: E1216 13:05:37.891483 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c77f7672e0399780d6c7fd995bff7246a1045ad61ca7f403dd7b31b39f9fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r88g8" Dec 16 13:05:37.891521 kubelet[3994]: E1216 13:05:37.891505 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c77f7672e0399780d6c7fd995bff7246a1045ad61ca7f403dd7b31b39f9fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r88g8" Dec 16 13:05:37.891573 kubelet[3994]: E1216 13:05:37.891551 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r88g8_kube-system(6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-674b8bbfcf-r88g8_kube-system(6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8c77f7672e0399780d6c7fd995bff7246a1045ad61ca7f403dd7b31b39f9fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r88g8" podUID="6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02" Dec 16 13:05:37.912138 containerd[2510]: time="2025-12-16T13:05:37.912074159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:05:37.978159 containerd[2510]: time="2025-12-16T13:05:37.977947706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5795585bdf-wchgl,Uid:38053ce8-04be-4cc4-9a23-d3c55f115a9e,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:37.994393 containerd[2510]: time="2025-12-16T13:05:37.994250295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76746db8cb-s8hqj,Uid:17414321-ac90-43ed-affc-521db178bc15,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:38.024008 containerd[2510]: time="2025-12-16T13:05:38.023825758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-6zsvz,Uid:e585732d-c5ba-41d4-91da-20d86215882e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:38.024892 containerd[2510]: time="2025-12-16T13:05:38.024821485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2slmb,Uid:aa48803c-33bc-4da1-94e0-bc256a6f415a,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:38.048534 containerd[2510]: time="2025-12-16T13:05:38.048496223Z" level=error msg="Failed to destroy network for sandbox \"9b216da49004f728c261752292c4f7f91fc9f135f503f681bf36cb1d7887960f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Dec 16 13:05:38.065086 containerd[2510]: time="2025-12-16T13:05:38.064984755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5795585bdf-wchgl,Uid:38053ce8-04be-4cc4-9a23-d3c55f115a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b216da49004f728c261752292c4f7f91fc9f135f503f681bf36cb1d7887960f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.065269 kubelet[3994]: E1216 13:05:38.065219 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b216da49004f728c261752292c4f7f91fc9f135f503f681bf36cb1d7887960f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.065322 kubelet[3994]: E1216 13:05:38.065294 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b216da49004f728c261752292c4f7f91fc9f135f503f681bf36cb1d7887960f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5795585bdf-wchgl" Dec 16 13:05:38.065868 kubelet[3994]: E1216 13:05:38.065490 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b216da49004f728c261752292c4f7f91fc9f135f503f681bf36cb1d7887960f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-5795585bdf-wchgl" Dec 16 13:05:38.065868 kubelet[3994]: E1216 13:05:38.065565 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5795585bdf-wchgl_calico-system(38053ce8-04be-4cc4-9a23-d3c55f115a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5795585bdf-wchgl_calico-system(38053ce8-04be-4cc4-9a23-d3c55f115a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b216da49004f728c261752292c4f7f91fc9f135f503f681bf36cb1d7887960f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5795585bdf-wchgl" podUID="38053ce8-04be-4cc4-9a23-d3c55f115a9e" Dec 16 13:05:38.088992 containerd[2510]: time="2025-12-16T13:05:38.088963108Z" level=error msg="Failed to destroy network for sandbox \"1971b237a9be2c7a990fa8ea492dc6a725ea8e6d795d79fdb31320742ffee84e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.096805 containerd[2510]: time="2025-12-16T13:05:38.096771201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76746db8cb-s8hqj,Uid:17414321-ac90-43ed-affc-521db178bc15,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1971b237a9be2c7a990fa8ea492dc6a725ea8e6d795d79fdb31320742ffee84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.097247 kubelet[3994]: E1216 13:05:38.097212 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"1971b237a9be2c7a990fa8ea492dc6a725ea8e6d795d79fdb31320742ffee84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.097876 kubelet[3994]: E1216 13:05:38.097589 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1971b237a9be2c7a990fa8ea492dc6a725ea8e6d795d79fdb31320742ffee84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" Dec 16 13:05:38.097876 kubelet[3994]: E1216 13:05:38.097619 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1971b237a9be2c7a990fa8ea492dc6a725ea8e6d795d79fdb31320742ffee84e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" Dec 16 13:05:38.097876 kubelet[3994]: E1216 13:05:38.097674 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76746db8cb-s8hqj_calico-system(17414321-ac90-43ed-affc-521db178bc15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76746db8cb-s8hqj_calico-system(17414321-ac90-43ed-affc-521db178bc15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1971b237a9be2c7a990fa8ea492dc6a725ea8e6d795d79fdb31320742ffee84e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:05:38.115958 containerd[2510]: time="2025-12-16T13:05:38.115925940Z" level=error msg="Failed to destroy network for sandbox \"6c3a6ee8f3421dc2f45f141d9d8d6d4ca224f60a625a680b3324783ff180d4de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.129115 containerd[2510]: time="2025-12-16T13:05:38.129085513Z" level=error msg="Failed to destroy network for sandbox \"ed92965c8b6993e4b225fc6c985a4c74246439b0a5c704bc15f4cc08d105715f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.130732 containerd[2510]: time="2025-12-16T13:05:38.130704174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-6zsvz,Uid:e585732d-c5ba-41d4-91da-20d86215882e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3a6ee8f3421dc2f45f141d9d8d6d4ca224f60a625a680b3324783ff180d4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.131031 kubelet[3994]: E1216 13:05:38.131006 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3a6ee8f3421dc2f45f141d9d8d6d4ca224f60a625a680b3324783ff180d4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
13:05:38.131098 kubelet[3994]: E1216 13:05:38.131050 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3a6ee8f3421dc2f45f141d9d8d6d4ca224f60a625a680b3324783ff180d4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" Dec 16 13:05:38.131098 kubelet[3994]: E1216 13:05:38.131077 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3a6ee8f3421dc2f45f141d9d8d6d4ca224f60a625a680b3324783ff180d4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" Dec 16 13:05:38.131153 kubelet[3994]: E1216 13:05:38.131126 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddb857f6f-6zsvz_calico-apiserver(e585732d-c5ba-41d4-91da-20d86215882e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddb857f6f-6zsvz_calico-apiserver(e585732d-c5ba-41d4-91da-20d86215882e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c3a6ee8f3421dc2f45f141d9d8d6d4ca224f60a625a680b3324783ff180d4de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:05:38.137402 containerd[2510]: time="2025-12-16T13:05:38.137361785Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-2slmb,Uid:aa48803c-33bc-4da1-94e0-bc256a6f415a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed92965c8b6993e4b225fc6c985a4c74246439b0a5c704bc15f4cc08d105715f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.137531 kubelet[3994]: E1216 13:05:38.137507 3994 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed92965c8b6993e4b225fc6c985a4c74246439b0a5c704bc15f4cc08d105715f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:05:38.137584 kubelet[3994]: E1216 13:05:38.137551 3994 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed92965c8b6993e4b225fc6c985a4c74246439b0a5c704bc15f4cc08d105715f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2slmb" Dec 16 13:05:38.137584 kubelet[3994]: E1216 13:05:38.137570 3994 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed92965c8b6993e4b225fc6c985a4c74246439b0a5c704bc15f4cc08d105715f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2slmb" Dec 16 13:05:38.137650 kubelet[3994]: E1216 13:05:38.137617 3994 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2slmb_calico-system(aa48803c-33bc-4da1-94e0-bc256a6f415a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2slmb_calico-system(aa48803c-33bc-4da1-94e0-bc256a6f415a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed92965c8b6993e4b225fc6c985a4c74246439b0a5c704bc15f4cc08d105715f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:05:38.736432 systemd[1]: run-netns-cni\x2d4d0da031\x2d97f3\x2d207d\x2ddf69\x2d4fb6715fade3.mount: Deactivated successfully. Dec 16 13:05:38.736865 systemd[1]: run-netns-cni\x2d40a2e435\x2dad14\x2d420b\x2dd287\x2dff69584faf5d.mount: Deactivated successfully. Dec 16 13:05:38.736992 systemd[1]: run-netns-cni\x2db3fc06de\x2dc599\x2d0410\x2dfb41\x2d79fd41cae158.mount: Deactivated successfully. Dec 16 13:05:44.835988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489731759.mount: Deactivated successfully. 
Dec 16 13:05:44.881833 containerd[2510]: time="2025-12-16T13:05:44.881789981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.885162 containerd[2510]: time="2025-12-16T13:05:44.885122179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 13:05:44.888962 containerd[2510]: time="2025-12-16T13:05:44.888916487Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.897130 containerd[2510]: time="2025-12-16T13:05:44.897062708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.897659 containerd[2510]: time="2025-12-16T13:05:44.897409048Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.98518337s" Dec 16 13:05:44.897659 containerd[2510]: time="2025-12-16T13:05:44.897440661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:05:44.913536 containerd[2510]: time="2025-12-16T13:05:44.913498302Z" level=info msg="CreateContainer within sandbox \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:05:44.941740 containerd[2510]: time="2025-12-16T13:05:44.941710206Z" level=info msg="Container 
08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:44.973547 containerd[2510]: time="2025-12-16T13:05:44.973516190Z" level=info msg="CreateContainer within sandbox \"17b653deec7f383d5ff1e77eaa41fc2339ef6731a46dec5497b551e75d56d639\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a\"" Dec 16 13:05:44.976724 containerd[2510]: time="2025-12-16T13:05:44.976241641Z" level=info msg="StartContainer for \"08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a\"" Dec 16 13:05:44.977684 containerd[2510]: time="2025-12-16T13:05:44.977618826Z" level=info msg="connecting to shim 08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a" address="unix:///run/containerd/s/c0d2a381e251d388021580d6c7672f12e199c3d85677d88d6ff4884aaad2e308" protocol=ttrpc version=3 Dec 16 13:05:44.996048 systemd[1]: Started cri-containerd-08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a.scope - libcontainer container 08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a. 
Dec 16 13:05:45.037994 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 13:05:45.038250 kernel: audit: type=1334 audit(1765890345.035:585): prog-id=196 op=LOAD Dec 16 13:05:45.035000 audit: BPF prog-id=196 op=LOAD Dec 16 13:05:45.043780 kernel: audit: type=1300 audit(1765890345.035:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.035000 audit[5048]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.048928 kernel: audit: type=1327 audit(1765890345.035:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.051013 kernel: audit: type=1334 audit(1765890345.035:586): prog-id=197 op=LOAD Dec 16 13:05:45.035000 audit: BPF prog-id=197 op=LOAD Dec 16 13:05:45.035000 audit[5048]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.064129 kernel: audit: type=1300 audit(1765890345.035:586): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.064190 kernel: audit: type=1327 audit(1765890345.035:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.067331 kernel: audit: type=1334 audit(1765890345.035:587): prog-id=197 op=UNLOAD Dec 16 13:05:45.035000 audit: BPF prog-id=197 op=UNLOAD Dec 16 13:05:45.035000 audit[5048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.073374 kernel: audit: type=1300 audit(1765890345.035:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.035000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.078860 containerd[2510]: time="2025-12-16T13:05:45.078802878Z" level=info msg="StartContainer for \"08040296cac2b7ea8dafad5b4d675592aed2d290179e0f1424f3f7b14a00162a\" returns successfully" Dec 16 13:05:45.079513 kernel: audit: type=1327 audit(1765890345.035:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.035000 audit: BPF prog-id=196 op=UNLOAD Dec 16 13:05:45.035000 audit[5048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.035000 audit: BPF prog-id=198 op=LOAD Dec 16 13:05:45.035000 audit[5048]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4544 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:45.084903 kernel: audit: type=1334 audit(1765890345.035:588): prog-id=196 op=UNLOAD Dec 16 13:05:45.035000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038303430323936636163326237656138646166616435623464363735 Dec 16 13:05:45.332355 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:05:45.332472 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 13:05:45.577391 kubelet[3994]: I1216 13:05:45.577355 3994 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-backend-key-pair\") pod \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\" (UID: \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\") " Dec 16 13:05:45.579539 kubelet[3994]: I1216 13:05:45.578934 3994 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp2t9\" (UniqueName: \"kubernetes.io/projected/38053ce8-04be-4cc4-9a23-d3c55f115a9e-kube-api-access-bp2t9\") pod \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\" (UID: \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\") " Dec 16 13:05:45.579539 kubelet[3994]: I1216 13:05:45.578975 3994 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-ca-bundle\") pod \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\" (UID: \"38053ce8-04be-4cc4-9a23-d3c55f115a9e\") " Dec 16 13:05:45.579539 kubelet[3994]: I1216 13:05:45.579308 3994 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "38053ce8-04be-4cc4-9a23-d3c55f115a9e" (UID: "38053ce8-04be-4cc4-9a23-d3c55f115a9e"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:05:45.583969 kubelet[3994]: I1216 13:05:45.583898 3994 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "38053ce8-04be-4cc4-9a23-d3c55f115a9e" (UID: "38053ce8-04be-4cc4-9a23-d3c55f115a9e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:05:45.584553 kubelet[3994]: I1216 13:05:45.584512 3994 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38053ce8-04be-4cc4-9a23-d3c55f115a9e-kube-api-access-bp2t9" (OuterVolumeSpecName: "kube-api-access-bp2t9") pod "38053ce8-04be-4cc4-9a23-d3c55f115a9e" (UID: "38053ce8-04be-4cc4-9a23-d3c55f115a9e"). InnerVolumeSpecName "kube-api-access-bp2t9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:05:45.679787 kubelet[3994]: I1216 13:05:45.679739 3994 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-968fde264e\" DevicePath \"\"" Dec 16 13:05:45.679787 kubelet[3994]: I1216 13:05:45.679782 3994 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bp2t9\" (UniqueName: \"kubernetes.io/projected/38053ce8-04be-4cc4-9a23-d3c55f115a9e-kube-api-access-bp2t9\") on node \"ci-4515.1.0-a-968fde264e\" DevicePath \"\"" Dec 16 13:05:45.679787 kubelet[3994]: I1216 13:05:45.679794 3994 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38053ce8-04be-4cc4-9a23-d3c55f115a9e-whisker-ca-bundle\") on node \"ci-4515.1.0-a-968fde264e\" DevicePath \"\"" Dec 16 13:05:45.827366 systemd[1]: Removed slice kubepods-besteffort-pod38053ce8_04be_4cc4_9a23_d3c55f115a9e.slice - 
libcontainer container kubepods-besteffort-pod38053ce8_04be_4cc4_9a23_d3c55f115a9e.slice. Dec 16 13:05:45.834996 systemd[1]: var-lib-kubelet-pods-38053ce8\x2d04be\x2d4cc4\x2d9a23\x2dd3c55f115a9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbp2t9.mount: Deactivated successfully. Dec 16 13:05:45.835103 systemd[1]: var-lib-kubelet-pods-38053ce8\x2d04be\x2d4cc4\x2d9a23\x2dd3c55f115a9e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:05:45.976884 kubelet[3994]: I1216 13:05:45.976120 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m5msp" podStartSLOduration=1.9906703239999999 podStartE2EDuration="20.97610208s" podCreationTimestamp="2025-12-16 13:05:25 +0000 UTC" firstStartedPulling="2025-12-16 13:05:25.912800564 +0000 UTC m=+20.201712836" lastFinishedPulling="2025-12-16 13:05:44.898232326 +0000 UTC m=+39.187144592" observedRunningTime="2025-12-16 13:05:45.957767512 +0000 UTC m=+40.246679792" watchObservedRunningTime="2025-12-16 13:05:45.97610208 +0000 UTC m=+40.265014346" Dec 16 13:05:46.061951 systemd[1]: Created slice kubepods-besteffort-pod78647b74_1322_40dc_8769_efb3043691d4.slice - libcontainer container kubepods-besteffort-pod78647b74_1322_40dc_8769_efb3043691d4.slice. 
Dec 16 13:05:46.185512 kubelet[3994]: I1216 13:05:46.185456 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78647b74-1322-40dc-8769-efb3043691d4-whisker-ca-bundle\") pod \"whisker-556dc6d57-6kbq4\" (UID: \"78647b74-1322-40dc-8769-efb3043691d4\") " pod="calico-system/whisker-556dc6d57-6kbq4" Dec 16 13:05:46.185512 kubelet[3994]: I1216 13:05:46.185501 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78647b74-1322-40dc-8769-efb3043691d4-whisker-backend-key-pair\") pod \"whisker-556dc6d57-6kbq4\" (UID: \"78647b74-1322-40dc-8769-efb3043691d4\") " pod="calico-system/whisker-556dc6d57-6kbq4" Dec 16 13:05:46.185719 kubelet[3994]: I1216 13:05:46.185526 3994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw2t\" (UniqueName: \"kubernetes.io/projected/78647b74-1322-40dc-8769-efb3043691d4-kube-api-access-cxw2t\") pod \"whisker-556dc6d57-6kbq4\" (UID: \"78647b74-1322-40dc-8769-efb3043691d4\") " pod="calico-system/whisker-556dc6d57-6kbq4" Dec 16 13:05:46.366175 containerd[2510]: time="2025-12-16T13:05:46.366134700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-556dc6d57-6kbq4,Uid:78647b74-1322-40dc-8769-efb3043691d4,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:46.512206 systemd-networkd[2142]: cali7c93967be0c: Link UP Dec 16 13:05:46.512382 systemd-networkd[2142]: cali7c93967be0c: Gained carrier Dec 16 13:05:46.548630 containerd[2510]: 2025-12-16 13:05:46.398 [INFO][5133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:05:46.548630 containerd[2510]: 2025-12-16 13:05:46.406 [INFO][5133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0 whisker-556dc6d57- calico-system 78647b74-1322-40dc-8769-efb3043691d4 920 0 2025-12-16 13:05:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:556dc6d57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e whisker-556dc6d57-6kbq4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7c93967be0c [] [] }} ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-" Dec 16 13:05:46.548630 containerd[2510]: 2025-12-16 13:05:46.406 [INFO][5133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.548630 containerd[2510]: 2025-12-16 13:05:46.427 [INFO][5145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" HandleID="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Workload="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.427 [INFO][5145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" HandleID="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Workload="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6c0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4515.1.0-a-968fde264e", "pod":"whisker-556dc6d57-6kbq4", "timestamp":"2025-12-16 13:05:46.427429468 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.427 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.427 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.427 [INFO][5145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.433 [INFO][5145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.441 [INFO][5145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.446 [INFO][5145] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.450 [INFO][5145] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.549767 containerd[2510]: 2025-12-16 13:05:46.456 [INFO][5145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.456 [INFO][5145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 
handle="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.466 [INFO][5145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764 Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.477 [INFO][5145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.493 [INFO][5145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.1/26] block=192.168.32.0/26 handle="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.493 [INFO][5145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.1/26] handle="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.493 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:05:46.550128 containerd[2510]: 2025-12-16 13:05:46.493 [INFO][5145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.1/26] IPv6=[] ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" HandleID="k8s-pod-network.36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Workload="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.550274 containerd[2510]: 2025-12-16 13:05:46.499 [INFO][5133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0", GenerateName:"whisker-556dc6d57-", Namespace:"calico-system", SelfLink:"", UID:"78647b74-1322-40dc-8769-efb3043691d4", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"556dc6d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"whisker-556dc6d57-6kbq4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali7c93967be0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:46.550274 containerd[2510]: 2025-12-16 13:05:46.499 [INFO][5133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.1/32] ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.550361 containerd[2510]: 2025-12-16 13:05:46.499 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c93967be0c ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.550361 containerd[2510]: 2025-12-16 13:05:46.512 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.550405 containerd[2510]: 2025-12-16 13:05:46.516 [INFO][5133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0", GenerateName:"whisker-556dc6d57-", Namespace:"calico-system", SelfLink:"", UID:"78647b74-1322-40dc-8769-efb3043691d4", 
ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"556dc6d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764", Pod:"whisker-556dc6d57-6kbq4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7c93967be0c", MAC:"6e:0f:97:2c:91:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:46.550981 containerd[2510]: 2025-12-16 13:05:46.546 [INFO][5133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" Namespace="calico-system" Pod="whisker-556dc6d57-6kbq4" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-whisker--556dc6d57--6kbq4-eth0" Dec 16 13:05:46.612837 containerd[2510]: time="2025-12-16T13:05:46.612795602Z" level=info msg="connecting to shim 36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764" address="unix:///run/containerd/s/93e9b27a488e53e284821a8418b04c6f95bb8d06c31598e6c9b18e8435d9140c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:46.657099 systemd[1]: Started cri-containerd-36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764.scope - libcontainer container 
36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764. Dec 16 13:05:46.672000 audit: BPF prog-id=199 op=LOAD Dec 16 13:05:46.673000 audit: BPF prog-id=200 op=LOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.673000 audit: BPF prog-id=200 op=UNLOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.673000 audit: BPF prog-id=201 op=LOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.673000 audit: BPF prog-id=202 op=LOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.673000 audit: BPF prog-id=202 op=UNLOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.673000 audit: BPF prog-id=201 op=UNLOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:46.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.673000 audit: BPF prog-id=203 op=LOAD Dec 16 13:05:46.673000 audit[5207]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5185 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:46.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336626336663464363161326635616363653134346361376133346638 Dec 16 13:05:46.768296 containerd[2510]: time="2025-12-16T13:05:46.768066073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-556dc6d57-6kbq4,Uid:78647b74-1322-40dc-8769-efb3043691d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"36bc6f4d61a2f5acce144ca7a34f886cad49fa05ae8de882dd80824cd594c764\"" Dec 16 13:05:46.770482 containerd[2510]: time="2025-12-16T13:05:46.770452514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:05:46.991135 kubelet[3994]: I1216 13:05:46.991105 3994 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:05:47.024000 audit[5327]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=5327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:47.024000 audit[5327]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe03a15f0 a2=0 a3=7fffe03a15dc items=0 ppid=4104 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:47.031000 audit[5327]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=5327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:47.031000 audit[5327]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffe03a15f0 a2=0 a3=7fffe03a15dc items=0 ppid=4104 pid=5327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:47.041430 containerd[2510]: time="2025-12-16T13:05:47.041394257Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:47.047693 containerd[2510]: time="2025-12-16T13:05:47.047659225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:05:47.047767 containerd[2510]: time="2025-12-16T13:05:47.047741869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:47.047919 kubelet[3994]: E1216 13:05:47.047888 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:05:47.047975 kubelet[3994]: E1216 13:05:47.047931 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:05:47.048128 kubelet[3994]: E1216 13:05:47.048081 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b02e6467f2cc48f990d97e006e40e793,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:47.050428 containerd[2510]: time="2025-12-16T13:05:47.050400530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:05:47.336797 containerd[2510]: time="2025-12-16T13:05:47.336649166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:47.347558 containerd[2510]: time="2025-12-16T13:05:47.347517492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:05:47.347636 containerd[2510]: time="2025-12-16T13:05:47.347602077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:47.347818 kubelet[3994]: E1216 13:05:47.347749 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:05:47.347818 kubelet[3994]: E1216 13:05:47.347805 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:05:47.348049 kubelet[3994]: E1216 13:05:47.347984 3994 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:47.349188 kubelet[3994]: E1216 13:05:47.349132 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:05:47.824248 kubelet[3994]: I1216 13:05:47.824206 3994 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38053ce8-04be-4cc4-9a23-d3c55f115a9e" path="/var/lib/kubelet/pods/38053ce8-04be-4cc4-9a23-d3c55f115a9e/volumes" Dec 16 13:05:47.857997 systemd-networkd[2142]: cali7c93967be0c: Gained IPv6LL Dec 16 13:05:47.937289 kubelet[3994]: E1216 13:05:47.937185 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:05:47.986000 audit[5349]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:47.986000 audit[5349]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd37c8610 a2=0 a3=7ffdd37c85fc items=0 ppid=4104 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:47.991000 audit[5349]: NETFILTER_CFG table=nat:123 family=2 entries=14 op=nft_register_rule pid=5349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:47.991000 audit[5349]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd37c8610 a2=0 a3=0 items=0 ppid=4104 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:47.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:48.132000 audit: BPF prog-id=204 op=LOAD Dec 16 13:05:48.132000 audit[5386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0496d910 a2=98 a3=1fffffffffffffff items=0 ppid=5350 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 13:05:48.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:48.132000 audit: BPF prog-id=204 op=UNLOAD Dec 16 13:05:48.132000 audit[5386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0496d8e0 a3=0 items=0 ppid=5350 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:48.132000 audit: BPF prog-id=205 op=LOAD Dec 16 13:05:48.132000 audit[5386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0496d7f0 a2=94 a3=3 items=0 ppid=5350 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:48.132000 audit: BPF prog-id=205 op=UNLOAD Dec 16 13:05:48.132000 audit[5386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0496d7f0 a2=94 a3=3 items=0 ppid=5350 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:48.132000 audit: BPF prog-id=206 op=LOAD Dec 16 13:05:48.132000 audit[5386]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0496d830 a2=94 a3=7ffd0496da10 items=0 ppid=5350 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:48.132000 audit: BPF prog-id=206 op=UNLOAD Dec 16 13:05:48.132000 audit[5386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0496d830 a2=94 a3=7ffd0496da10 items=0 ppid=5350 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 13:05:48.133000 audit: BPF prog-id=207 op=LOAD Dec 16 13:05:48.133000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff86a2e170 a2=98 a3=3 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.133000 audit: BPF prog-id=207 op=UNLOAD Dec 16 13:05:48.133000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff86a2e140 a3=0 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.133000 audit: BPF prog-id=208 op=LOAD Dec 16 13:05:48.133000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff86a2df60 a2=94 a3=54428f items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.133000 audit: BPF prog-id=208 op=UNLOAD Dec 16 13:05:48.133000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff86a2df60 a2=94 a3=54428f items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.133000 audit: BPF prog-id=209 op=LOAD Dec 16 13:05:48.133000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff86a2df90 a2=94 a3=2 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.133000 audit: BPF prog-id=209 op=UNLOAD Dec 16 13:05:48.133000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff86a2df90 a2=0 a3=2 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.133000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.256000 audit: BPF prog-id=210 op=LOAD Dec 16 13:05:48.256000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff86a2de50 a2=94 a3=1 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.256000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.256000 audit: BPF prog-id=210 op=UNLOAD Dec 16 13:05:48.256000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff86a2de50 a2=94 a3=1 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.256000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=211 op=LOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff86a2de40 a2=94 a3=4 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=211 op=UNLOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff86a2de40 a2=0 a3=4 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=212 op=LOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff86a2dca0 a2=94 a3=5 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=212 op=UNLOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff86a2dca0 a2=0 a3=5 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=213 op=LOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff86a2dec0 a2=94 a3=6 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=213 op=UNLOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff86a2dec0 a2=0 a3=6 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.266000 audit: BPF prog-id=214 op=LOAD Dec 16 13:05:48.266000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff86a2d670 a2=94 a3=88 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.266000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.267000 audit: BPF prog-id=215 op=LOAD Dec 16 13:05:48.267000 audit[5387]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff86a2d4f0 a2=94 a3=2 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.267000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.267000 audit: BPF prog-id=215 op=UNLOAD Dec 16 13:05:48.267000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff86a2d520 a2=0 a3=7fff86a2d620 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.267000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.267000 audit: BPF prog-id=214 op=UNLOAD Dec 16 13:05:48.267000 audit[5387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=95c2d10 a2=0 a3=98205209172a8b15 items=0 ppid=5350 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.267000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 13:05:48.274000 audit: BPF prog-id=216 op=LOAD Dec 16 13:05:48.274000 audit[5390]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34ba0dc0 a2=98 a3=1999999999999999 items=0 ppid=5350 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:48.274000 audit: BPF prog-id=216 op=UNLOAD Dec 16 13:05:48.274000 audit[5390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe34ba0d90 a3=0 items=0 ppid=5350 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:48.274000 audit: BPF prog-id=217 op=LOAD 
Dec 16 13:05:48.274000 audit[5390]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34ba0ca0 a2=94 a3=ffff items=0 ppid=5350 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:48.274000 audit: BPF prog-id=217 op=UNLOAD Dec 16 13:05:48.274000 audit[5390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe34ba0ca0 a2=94 a3=ffff items=0 ppid=5350 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:48.274000 audit: BPF prog-id=218 op=LOAD Dec 16 13:05:48.274000 audit[5390]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34ba0ce0 a2=94 a3=7ffe34ba0ec0 items=0 ppid=5350 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.274000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:48.274000 audit: BPF prog-id=218 op=UNLOAD Dec 16 13:05:48.274000 audit[5390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe34ba0ce0 a2=94 a3=7ffe34ba0ec0 items=0 ppid=5350 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.274000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 13:05:48.371000 audit: BPF prog-id=219 op=LOAD Dec 16 13:05:48.371000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5febf970 a2=98 a3=0 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.371000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.371000 audit: BPF prog-id=219 op=UNLOAD Dec 16 13:05:48.371000 audit[5415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc5febf940 a3=0 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 13:05:48.371000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=220 op=LOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5febf780 a2=94 a3=54428f items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=220 op=UNLOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc5febf780 a2=94 a3=54428f items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=221 op=LOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5febf7b0 a2=94 a3=2 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=221 op=UNLOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc5febf7b0 a2=0 a3=2 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=222 op=LOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5febf560 a2=94 a3=4 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=222 op=UNLOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5febf560 a2=94 a3=4 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=223 op=LOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5febf660 a2=94 a3=7ffc5febf7e0 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=223 op=UNLOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5febf660 a2=0 a3=7ffc5febf7e0 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.372000 audit: BPF prog-id=224 op=LOAD Dec 16 13:05:48.372000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5febed90 a2=94 a3=2 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.372000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.374000 audit: BPF prog-id=224 op=UNLOAD Dec 16 13:05:48.374000 audit[5415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5febed90 a2=0 a3=2 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.374000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.374000 audit: BPF prog-id=225 op=LOAD Dec 16 13:05:48.374000 audit[5415]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5febee90 a2=94 a3=30 items=0 ppid=5350 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.374000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 13:05:48.382000 audit: BPF prog-id=226 op=LOAD Dec 16 13:05:48.382000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7ca34780 a2=98 a3=0 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.382000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.382000 audit: BPF prog-id=226 op=UNLOAD Dec 16 13:05:48.382000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe7ca34750 a3=0 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.382000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.382000 audit: BPF prog-id=227 op=LOAD Dec 16 13:05:48.382000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7ca34570 a2=94 a3=54428f items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.382000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.382000 audit: BPF prog-id=227 op=UNLOAD Dec 16 13:05:48.382000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe7ca34570 a2=94 a3=54428f items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.382000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.382000 audit: BPF prog-id=228 op=LOAD Dec 16 13:05:48.382000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7ca345a0 a2=94 a3=2 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.382000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.383000 audit: BPF prog-id=228 op=UNLOAD Dec 16 13:05:48.383000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe7ca345a0 a2=0 a3=2 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.443494 systemd-networkd[2142]: vxlan.calico: Link UP Dec 16 13:05:48.443630 systemd-networkd[2142]: vxlan.calico: Gained carrier Dec 16 13:05:48.549000 audit: BPF prog-id=229 op=LOAD Dec 16 13:05:48.549000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7ca34460 a2=94 a3=1 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.549000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.549000 audit: BPF prog-id=229 op=UNLOAD Dec 16 13:05:48.549000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe7ca34460 a2=94 a3=1 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.549000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.558000 audit: BPF prog-id=230 op=LOAD Dec 16 13:05:48.558000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe7ca34450 a2=94 a3=4 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.558000 audit: BPF prog-id=230 op=UNLOAD Dec 16 13:05:48.558000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe7ca34450 a2=0 a3=4 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.558000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.558000 audit: BPF prog-id=231 op=LOAD Dec 16 13:05:48.558000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe7ca342b0 a2=94 a3=5 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.558000 audit: BPF prog-id=231 op=UNLOAD Dec 16 13:05:48.558000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe7ca342b0 a2=0 a3=5 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.558000 audit: BPF prog-id=232 op=LOAD Dec 16 13:05:48.558000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe7ca344d0 a2=94 a3=6 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.558000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.558000 audit: BPF prog-id=232 op=UNLOAD Dec 16 13:05:48.558000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe7ca344d0 a2=0 a3=6 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.559000 audit: BPF prog-id=233 op=LOAD Dec 16 13:05:48.559000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe7ca33c80 a2=94 a3=88 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.559000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.559000 audit: BPF prog-id=234 op=LOAD Dec 16 13:05:48.559000 audit[5421]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe7ca33b00 a2=94 a3=2 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.559000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.559000 audit: BPF prog-id=234 op=UNLOAD Dec 16 13:05:48.559000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe7ca33b30 a2=0 a3=7ffe7ca33c30 items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.559000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.559000 audit: BPF prog-id=233 op=UNLOAD Dec 16 13:05:48.559000 audit[5421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2e0bdd10 a2=0 a3=6f74787b46777b8b items=0 ppid=5350 pid=5421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.559000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 13:05:48.568000 audit: BPF prog-id=225 op=UNLOAD Dec 16 13:05:48.568000 audit[5350]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000e46480 a2=0 a3=0 items=0 ppid=5164 pid=5350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.568000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 13:05:48.666000 audit[5446]: NETFILTER_CFG table=nat:124 
family=2 entries=15 op=nft_register_chain pid=5446 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:48.666000 audit[5446]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffec8309b80 a2=0 a3=7ffec8309b6c items=0 ppid=5350 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.666000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:48.667000 audit[5450]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5450 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:48.667000 audit[5450]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdd6808850 a2=0 a3=7ffdd680883c items=0 ppid=5350 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.667000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:48.695000 audit[5449]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5449 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:48.695000 audit[5449]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff416dd9a0 a2=0 a3=7fff416dd98c items=0 ppid=5350 pid=5449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.695000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:48.714000 audit[5448]: NETFILTER_CFG table=filter:127 family=2 entries=94 op=nft_register_chain pid=5448 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:48.714000 audit[5448]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc64d8e890 a2=0 a3=564964608000 items=0 ppid=5350 pid=5448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:48.714000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:48.823154 containerd[2510]: time="2025-12-16T13:05:48.823111782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvkgt,Uid:2a3a8948-f27a-4f11-916b-8b42b855e619,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:48.823691 containerd[2510]: time="2025-12-16T13:05:48.823259207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r88g8,Uid:6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:48.823691 containerd[2510]: time="2025-12-16T13:05:48.823387974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-r577f,Uid:dd14feb0-ccbc-4867-9fa9-0c2099e4adc4,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:48.823691 containerd[2510]: time="2025-12-16T13:05:48.823117789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-6zsvz,Uid:e585732d-c5ba-41d4-91da-20d86215882e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:05:49.052338 systemd-networkd[2142]: cali0202faf388b: Link UP Dec 16 13:05:49.054875 systemd-networkd[2142]: 
cali0202faf388b: Gained carrier Dec 16 13:05:49.069717 containerd[2510]: 2025-12-16 13:05:48.969 [INFO][5471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0 coredns-674b8bbfcf- kube-system 6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02 852 0 2025-12-16 13:05:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e coredns-674b8bbfcf-r88g8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0202faf388b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-" Dec 16 13:05:49.069717 containerd[2510]: 2025-12-16 13:05:48.970 [INFO][5471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.069717 containerd[2510]: 2025-12-16 13:05:49.006 [INFO][5520] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" HandleID="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Workload="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.006 [INFO][5520] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" 
HandleID="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Workload="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5080), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-968fde264e", "pod":"coredns-674b8bbfcf-r88g8", "timestamp":"2025-12-16 13:05:49.006036712 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.006 [INFO][5520] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.006 [INFO][5520] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.006 [INFO][5520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.015 [INFO][5520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.021 [INFO][5520] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.025 [INFO][5520] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.027 [INFO][5520] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070175 containerd[2510]: 2025-12-16 13:05:49.029 [INFO][5520] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.029 [INFO][5520] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.030 [INFO][5520] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091 Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.036 [INFO][5520] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.046 [INFO][5520] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.2/26] block=192.168.32.0/26 handle="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.046 [INFO][5520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.2/26] handle="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.046 [INFO][5520] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:05:49.070406 containerd[2510]: 2025-12-16 13:05:49.046 [INFO][5520] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.2/26] IPv6=[] ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" HandleID="k8s-pod-network.1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Workload="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.070596 containerd[2510]: 2025-12-16 13:05:49.048 [INFO][5471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"coredns-674b8bbfcf-r88g8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali0202faf388b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.070596 containerd[2510]: 2025-12-16 13:05:49.048 [INFO][5471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.2/32] ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.070596 containerd[2510]: 2025-12-16 13:05:49.049 [INFO][5471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0202faf388b ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.070596 containerd[2510]: 2025-12-16 13:05:49.056 [INFO][5471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.070596 containerd[2510]: 2025-12-16 13:05:49.056 [INFO][5471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" 
WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091", Pod:"coredns-674b8bbfcf-r88g8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0202faf388b", MAC:"72:ac:41:b0:d3:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.070596 
containerd[2510]: 2025-12-16 13:05:49.067 [INFO][5471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" Namespace="kube-system" Pod="coredns-674b8bbfcf-r88g8" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--r88g8-eth0" Dec 16 13:05:49.082000 audit[5548]: NETFILTER_CFG table=filter:128 family=2 entries=42 op=nft_register_chain pid=5548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:49.082000 audit[5548]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffcd181a3e0 a2=0 a3=7ffcd181a3cc items=0 ppid=5350 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.082000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:49.122597 containerd[2510]: time="2025-12-16T13:05:49.122537817Z" level=info msg="connecting to shim 1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091" address="unix:///run/containerd/s/9c41e03ca2a23f091dfc550f9f5753f1a37a824b2bc4d10b42dd6528282a8ce0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:49.149211 systemd[1]: Started cri-containerd-1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091.scope - libcontainer container 1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091. 
Dec 16 13:05:49.162928 systemd-networkd[2142]: cali49f0ebf95a4: Link UP Dec 16 13:05:49.164429 systemd-networkd[2142]: cali49f0ebf95a4: Gained carrier Dec 16 13:05:49.171000 audit: BPF prog-id=235 op=LOAD Dec 16 13:05:49.173000 audit: BPF prog-id=236 op=LOAD Dec 16 13:05:49.173000 audit[5571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.173000 audit: BPF prog-id=236 op=UNLOAD Dec 16 13:05:49.173000 audit[5571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.173000 audit: BPF prog-id=237 op=LOAD Dec 16 13:05:49.173000 audit[5571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.173000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.174000 audit: BPF prog-id=238 op=LOAD Dec 16 13:05:49.174000 audit[5571]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.174000 audit: BPF prog-id=238 op=UNLOAD Dec 16 13:05:49.174000 audit[5571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.174000 audit: BPF prog-id=237 op=UNLOAD Dec 16 13:05:49.174000 audit[5571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:49.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.174000 audit: BPF prog-id=239 op=LOAD Dec 16 13:05:49.174000 audit[5571]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5558 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136303932363764396161633265386466656633633639326236653264 Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:48.930 [INFO][5461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0 coredns-674b8bbfcf- kube-system 2a3a8948-f27a-4f11-916b-8b42b855e619 850 0 2025-12-16 13:05:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e coredns-674b8bbfcf-bvkgt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49f0ebf95a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:48.931 
[INFO][5461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.013 [INFO][5507] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" HandleID="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Workload="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.016 [INFO][5507] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" HandleID="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Workload="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-968fde264e", "pod":"coredns-674b8bbfcf-bvkgt", "timestamp":"2025-12-16 13:05:49.0133362 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.016 [INFO][5507] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.046 [INFO][5507] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.047 [INFO][5507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.116 [INFO][5507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.126 [INFO][5507] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.130 [INFO][5507] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.132 [INFO][5507] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.135 [INFO][5507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.135 [INFO][5507] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.136 [INFO][5507] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985 Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.141 [INFO][5507] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.153 [INFO][5507] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.32.3/26] block=192.168.32.0/26 handle="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.153 [INFO][5507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.3/26] handle="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.153 [INFO][5507] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:49.193474 containerd[2510]: 2025-12-16 13:05:49.153 [INFO][5507] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.3/26] IPv6=[] ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" HandleID="k8s-pod-network.d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Workload="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.194006 containerd[2510]: 2025-12-16 13:05:49.156 [INFO][5461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a3a8948-f27a-4f11-916b-8b42b855e619", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"coredns-674b8bbfcf-bvkgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49f0ebf95a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.194006 containerd[2510]: 2025-12-16 13:05:49.156 [INFO][5461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.3/32] ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.194006 containerd[2510]: 2025-12-16 13:05:49.156 [INFO][5461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49f0ebf95a4 ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.194006 containerd[2510]: 2025-12-16 13:05:49.176 [INFO][5461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.194006 containerd[2510]: 2025-12-16 13:05:49.176 [INFO][5461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a3a8948-f27a-4f11-916b-8b42b855e619", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985", Pod:"coredns-674b8bbfcf-bvkgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49f0ebf95a4", MAC:"7a:de:a2:45:3a:2e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.194006 containerd[2510]: 2025-12-16 13:05:49.190 [INFO][5461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" Namespace="kube-system" Pod="coredns-674b8bbfcf-bvkgt" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-coredns--674b8bbfcf--bvkgt-eth0" Dec 16 13:05:49.222000 audit[5605]: NETFILTER_CFG table=filter:129 family=2 entries=36 op=nft_register_chain pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:49.222000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=19156 a0=3 a1=7ffc76b30cf0 a2=0 a3=7ffc76b30cdc items=0 ppid=5350 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.222000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:49.230630 containerd[2510]: time="2025-12-16T13:05:49.230596097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r88g8,Uid:6ce8fb8e-6c4b-44f1-a6c2-6f2e0f1a3c02,Namespace:kube-system,Attempt:0,} returns sandbox id \"1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091\"" Dec 16 13:05:49.242355 containerd[2510]: 
time="2025-12-16T13:05:49.242243334Z" level=info msg="CreateContainer within sandbox \"1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:05:49.255675 containerd[2510]: time="2025-12-16T13:05:49.255637705Z" level=info msg="connecting to shim d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985" address="unix:///run/containerd/s/c9fdce5930179e10f3e941df830ff80387a7bdffbe0ce6626f58851fb3be8411" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:49.275054 systemd-networkd[2142]: cali05af5dfc502: Link UP Dec 16 13:05:49.276770 systemd-networkd[2142]: cali05af5dfc502: Gained carrier Dec 16 13:05:49.285829 containerd[2510]: time="2025-12-16T13:05:49.285574386Z" level=info msg="Container 4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:49.297058 systemd[1]: Started cri-containerd-d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985.scope - libcontainer container d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985. 
Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:48.963 [INFO][5493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0 calico-apiserver-ddb857f6f- calico-apiserver e585732d-c5ba-41d4-91da-20d86215882e 856 0 2025-12-16 13:05:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ddb857f6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e calico-apiserver-ddb857f6f-6zsvz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05af5dfc502 [] [] }} ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:48.965 [INFO][5493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.035 [INFO][5517] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" HandleID="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.036 [INFO][5517] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" HandleID="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-968fde264e", "pod":"calico-apiserver-ddb857f6f-6zsvz", "timestamp":"2025-12-16 13:05:49.035014751 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.036 [INFO][5517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.153 [INFO][5517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.155 [INFO][5517] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.217 [INFO][5517] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.232 [INFO][5517] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.239 [INFO][5517] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.241 [INFO][5517] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.243 [INFO][5517] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.243 [INFO][5517] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.245 [INFO][5517] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36 Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.251 [INFO][5517] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.259 [INFO][5517] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.32.4/26] block=192.168.32.0/26 handle="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.259 [INFO][5517] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.4/26] handle="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.260 [INFO][5517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:49.309819 containerd[2510]: 2025-12-16 13:05:49.261 [INFO][5517] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.4/26] IPv6=[] ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" HandleID="k8s-pod-network.e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.310405 containerd[2510]: 2025-12-16 13:05:49.265 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0", GenerateName:"calico-apiserver-ddb857f6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e585732d-c5ba-41d4-91da-20d86215882e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ddb857f6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"calico-apiserver-ddb857f6f-6zsvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05af5dfc502", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.310405 containerd[2510]: 2025-12-16 13:05:49.266 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.4/32] ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.310405 containerd[2510]: 2025-12-16 13:05:49.266 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05af5dfc502 ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.310405 containerd[2510]: 2025-12-16 13:05:49.283 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" 
WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.310405 containerd[2510]: 2025-12-16 13:05:49.283 [INFO][5493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0", GenerateName:"calico-apiserver-ddb857f6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e585732d-c5ba-41d4-91da-20d86215882e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddb857f6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36", Pod:"calico-apiserver-ddb857f6f-6zsvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05af5dfc502", MAC:"6a:61:fc:03:6d:f5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.310405 containerd[2510]: 2025-12-16 13:05:49.307 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-6zsvz" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--6zsvz-eth0" Dec 16 13:05:49.315000 audit: BPF prog-id=240 op=LOAD Dec 16 13:05:49.316000 audit: BPF prog-id=241 op=LOAD Dec 16 13:05:49.316000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.317000 audit: BPF prog-id=241 op=UNLOAD Dec 16 13:05:49.317000 audit[5628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.318000 audit: BPF prog-id=242 op=LOAD Dec 16 13:05:49.318000 audit[5628]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.318000 audit: BPF prog-id=243 op=LOAD Dec 16 13:05:49.318000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.318000 audit: BPF prog-id=243 op=UNLOAD Dec 16 13:05:49.318000 audit[5628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.318000 audit: BPF prog-id=242 
op=UNLOAD Dec 16 13:05:49.318000 audit[5628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.318000 audit: BPF prog-id=244 op=LOAD Dec 16 13:05:49.318000 audit[5628]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5615 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333865656336656533636137333838323361656165316338343234 Dec 16 13:05:49.320461 containerd[2510]: time="2025-12-16T13:05:49.318832931Z" level=info msg="CreateContainer within sandbox \"1609267d9aac2e8dfef3c692b6e2d3762fb7940edcabe8738007b09a4a7d7091\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903\"" Dec 16 13:05:49.321880 containerd[2510]: time="2025-12-16T13:05:49.321377364Z" level=info msg="StartContainer for \"4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903\"" Dec 16 13:05:49.322743 containerd[2510]: time="2025-12-16T13:05:49.322700452Z" level=info msg="connecting to shim 
4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903" address="unix:///run/containerd/s/9c41e03ca2a23f091dfc550f9f5753f1a37a824b2bc4d10b42dd6528282a8ce0" protocol=ttrpc version=3 Dec 16 13:05:49.347186 systemd[1]: Started cri-containerd-4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903.scope - libcontainer container 4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903. Dec 16 13:05:49.374000 audit: BPF prog-id=245 op=LOAD Dec 16 13:05:49.376000 audit: BPF prog-id=246 op=LOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.376000 audit: BPF prog-id=246 op=UNLOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.376000 audit: BPF prog-id=247 op=LOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 
items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.376000 audit: BPF prog-id=248 op=LOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.376000 audit: BPF prog-id=248 op=UNLOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.376000 audit: BPF prog-id=247 op=UNLOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.376000 audit: BPF prog-id=249 op=LOAD Dec 16 13:05:49.376000 audit[5653]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5558 pid=5653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313965643665623333356339373535313965653161653263353661 Dec 16 13:05:49.391358 containerd[2510]: time="2025-12-16T13:05:49.391286575Z" level=info msg="connecting to shim e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36" address="unix:///run/containerd/s/75c55e24a924cae5e21447efcda219b154e32ce60afe7f2dc1b2fe4f560e2b0c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:49.391607 containerd[2510]: time="2025-12-16T13:05:49.391576345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bvkgt,Uid:2a3a8948-f27a-4f11-916b-8b42b855e619,Namespace:kube-system,Attempt:0,} returns sandbox id \"d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985\"" Dec 16 13:05:49.411167 systemd-networkd[2142]: calib4196a88cb6: Link UP Dec 16 13:05:49.410000 audit[5698]: 
NETFILTER_CFG table=filter:130 family=2 entries=64 op=nft_register_chain pid=5698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:49.410000 audit[5698]: SYSCALL arch=c000003e syscall=46 success=yes exit=33436 a0=3 a1=7ffea38bf410 a2=0 a3=7ffea38bf3fc items=0 ppid=5350 pid=5698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.410000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:49.419156 containerd[2510]: time="2025-12-16T13:05:49.417004569Z" level=info msg="CreateContainer within sandbox \"d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:05:49.417778 systemd-networkd[2142]: calib4196a88cb6: Gained carrier Dec 16 13:05:49.439389 containerd[2510]: time="2025-12-16T13:05:49.439321992Z" level=info msg="StartContainer for \"4319ed6eb335c975519ee1ae2c56aa4ff27eb371a81e92d3c8c1c73268713903\" returns successfully" Dec 16 13:05:49.446024 containerd[2510]: time="2025-12-16T13:05:49.445978496Z" level=info msg="Container 84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:49.449097 systemd[1]: Started cri-containerd-e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36.scope - libcontainer container e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36. 
Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:48.969 [INFO][5481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0 calico-apiserver-ddb857f6f- calico-apiserver dd14feb0-ccbc-4867-9fa9-0c2099e4adc4 851 0 2025-12-16 13:05:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ddb857f6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e calico-apiserver-ddb857f6f-r577f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4196a88cb6 [] [] }} ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:48.970 [INFO][5481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.038 [INFO][5524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" HandleID="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.040 [INFO][5524] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" HandleID="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000165a10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-968fde264e", "pod":"calico-apiserver-ddb857f6f-r577f", "timestamp":"2025-12-16 13:05:49.038443096 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.040 [INFO][5524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.259 [INFO][5524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.260 [INFO][5524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.316 [INFO][5524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.329 [INFO][5524] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.345 [INFO][5524] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.350 [INFO][5524] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.355 [INFO][5524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.355 [INFO][5524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.357 [INFO][5524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566 Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.368 [INFO][5524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.381 [INFO][5524] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.32.5/26] block=192.168.32.0/26 handle="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.381 [INFO][5524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.5/26] handle="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.381 [INFO][5524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:49.457350 containerd[2510]: 2025-12-16 13:05:49.381 [INFO][5524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.5/26] IPv6=[] ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" HandleID="k8s-pod-network.e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.458880 containerd[2510]: 2025-12-16 13:05:49.383 [INFO][5481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0", GenerateName:"calico-apiserver-ddb857f6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd14feb0-ccbc-4867-9fa9-0c2099e4adc4", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ddb857f6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"calico-apiserver-ddb857f6f-r577f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4196a88cb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.458880 containerd[2510]: 2025-12-16 13:05:49.384 [INFO][5481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.5/32] ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.458880 containerd[2510]: 2025-12-16 13:05:49.384 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4196a88cb6 ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.458880 containerd[2510]: 2025-12-16 13:05:49.422 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" 
WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.458880 containerd[2510]: 2025-12-16 13:05:49.422 [INFO][5481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0", GenerateName:"calico-apiserver-ddb857f6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd14feb0-ccbc-4867-9fa9-0c2099e4adc4", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddb857f6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566", Pod:"calico-apiserver-ddb857f6f-r577f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4196a88cb6", MAC:"36:bb:0f:ac:c4:e7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:49.458880 containerd[2510]: 2025-12-16 13:05:49.453 [INFO][5481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" Namespace="calico-apiserver" Pod="calico-apiserver-ddb857f6f-r577f" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--apiserver--ddb857f6f--r577f-eth0" Dec 16 13:05:49.465665 containerd[2510]: time="2025-12-16T13:05:49.465630839Z" level=info msg="CreateContainer within sandbox \"d238eec6ee3ca738823aeae1c8424cce0207b1b5816f7796b98204e355cf2985\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c\"" Dec 16 13:05:49.466997 containerd[2510]: time="2025-12-16T13:05:49.466975326Z" level=info msg="StartContainer for \"84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c\"" Dec 16 13:05:49.468004 containerd[2510]: time="2025-12-16T13:05:49.467943620Z" level=info msg="connecting to shim 84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c" address="unix:///run/containerd/s/c9fdce5930179e10f3e941df830ff80387a7bdffbe0ce6626f58851fb3be8411" protocol=ttrpc version=3 Dec 16 13:05:49.488000 audit: BPF prog-id=250 op=LOAD Dec 16 13:05:49.494000 audit: BPF prog-id=251 op=LOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.494000 audit: BPF prog-id=251 op=UNLOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.494000 audit: BPF prog-id=252 op=LOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.494000 audit: BPF prog-id=253 op=LOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.494000 audit: BPF prog-id=253 op=UNLOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.494000 audit: BPF prog-id=252 op=UNLOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.494000 audit: BPF prog-id=254 op=LOAD Dec 16 13:05:49.494000 audit[5705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=5689 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535363964376566633730353064303831313138373565626238636331 Dec 16 13:05:49.495062 systemd[1]: Started cri-containerd-84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c.scope - libcontainer container 84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c. Dec 16 13:05:49.516767 containerd[2510]: time="2025-12-16T13:05:49.516735126Z" level=info msg="connecting to shim e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566" address="unix:///run/containerd/s/aa579a02b663537f75eaa0b896b023f17e6caf212298b3500c045dfb0e606ac1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:49.528000 audit: BPF prog-id=255 op=LOAD Dec 16 13:05:49.529000 audit: BPF prog-id=256 op=LOAD Dec 16 13:05:49.529000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.529000 audit: BPF prog-id=256 op=UNLOAD Dec 16 13:05:49.529000 audit[5733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 13:05:49.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.530000 audit: BPF prog-id=257 op=LOAD Dec 16 13:05:49.530000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.530000 audit: BPF prog-id=258 op=LOAD Dec 16 13:05:49.530000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.530000 audit: BPF prog-id=258 op=UNLOAD Dec 16 13:05:49.530000 audit[5733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.531000 audit: BPF prog-id=257 op=UNLOAD Dec 16 13:05:49.531000 audit[5733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.531000 audit: BPF prog-id=259 op=LOAD Dec 16 13:05:49.531000 audit[5733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5615 pid=5733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834633961303962323463363033323334653739653736396630646464 Dec 16 13:05:49.552205 systemd[1]: Started cri-containerd-e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566.scope - libcontainer container e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566. 
Dec 16 13:05:49.568445 containerd[2510]: time="2025-12-16T13:05:49.568218035Z" level=info msg="StartContainer for \"84c9a09b24c603234e79e769f0ddd49826826cd6a338cd1a7c7ec47b7d11278c\" returns successfully" Dec 16 13:05:49.577945 containerd[2510]: time="2025-12-16T13:05:49.577919494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-6zsvz,Uid:e585732d-c5ba-41d4-91da-20d86215882e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e569d7efc7050d08111875ebb8cc1caea228148dff5a3d014f737bb4f43f9f36\"" Dec 16 13:05:49.580689 containerd[2510]: time="2025-12-16T13:05:49.580663307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:05:49.608000 audit[5816]: NETFILTER_CFG table=filter:131 family=2 entries=45 op=nft_register_chain pid=5816 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:49.608000 audit[5816]: SYSCALL arch=c000003e syscall=46 success=yes exit=24248 a0=3 a1=7ffe5f3d0350 a2=0 a3=7ffe5f3d033c items=0 ppid=5350 pid=5816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.608000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:49.612000 audit: BPF prog-id=260 op=LOAD Dec 16 13:05:49.614000 audit: BPF prog-id=261 op=LOAD Dec 16 13:05:49.614000 audit[5778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.614000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.614000 audit: BPF prog-id=261 op=UNLOAD Dec 16 13:05:49.614000 audit[5778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.615000 audit: BPF prog-id=262 op=LOAD Dec 16 13:05:49.615000 audit[5778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.615000 audit: BPF prog-id=263 op=LOAD Dec 16 13:05:49.615000 audit[5778]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:05:49.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.615000 audit: BPF prog-id=263 op=UNLOAD Dec 16 13:05:49.615000 audit[5778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.615000 audit: BPF prog-id=262 op=UNLOAD Dec 16 13:05:49.615000 audit[5778]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.615000 audit: BPF prog-id=264 op=LOAD Dec 16 13:05:49.615000 audit[5778]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5767 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:49.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530306232336237383461343764306264353165663535633536633366 Dec 16 13:05:49.661822 containerd[2510]: time="2025-12-16T13:05:49.661784556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddb857f6f-r577f,Uid:dd14feb0-ccbc-4867-9fa9-0c2099e4adc4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e00b23b784a47d0bd51ef55c56c3f74bb7bec586376323dbec7988cc62a4c566\"" Dec 16 13:05:49.866225 containerd[2510]: time="2025-12-16T13:05:49.866098568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:49.869675 containerd[2510]: time="2025-12-16T13:05:49.869626348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:05:49.869796 containerd[2510]: time="2025-12-16T13:05:49.869642737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:49.869926 kubelet[3994]: E1216 13:05:49.869877 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:49.870168 kubelet[3994]: E1216 13:05:49.869945 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:49.870292 containerd[2510]: time="2025-12-16T13:05:49.870226185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:05:49.870530 kubelet[3994]: E1216 13:05:49.870474 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rxsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-6zsvz_calico-apiserver(e585732d-c5ba-41d4-91da-20d86215882e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:49.871958 kubelet[3994]: E1216 13:05:49.871916 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:05:49.947113 kubelet[3994]: E1216 13:05:49.946921 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:05:50.000000 audit[5830]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.000000 audit[5830]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd0c0506d0 a2=0 a3=7ffd0c0506bc items=0 ppid=4104 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.007000 audit[5830]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.007000 audit[5830]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd0c0506d0 a2=0 a3=0 items=0 ppid=4104 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.018341 kubelet[3994]: I1216 13:05:50.017457 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bvkgt" podStartSLOduration=40.017440036 podStartE2EDuration="40.017440036s" podCreationTimestamp="2025-12-16 13:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:49.988913186 +0000 UTC m=+44.277825471" watchObservedRunningTime="2025-12-16 13:05:50.017440036 +0000 UTC m=+44.306352318" Dec 16 13:05:50.034000 audit[5832]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=5832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.034000 audit[5832]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd5d73d860 a2=0 a3=7ffd5d73d84c items=0 ppid=4104 pid=5832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.038000 audit[5832]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.040566 kernel: kauditd_printk_skb: 390 callbacks suppressed Dec 16 13:05:50.040638 kernel: audit: type=1325 audit(1765890350.038:723): table=nat:135 family=2 entries=35 op=nft_register_chain pid=5832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:50.038000 audit[5832]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd5d73d860 a2=0 a3=7ffd5d73d84c items=0 ppid=4104 pid=5832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.048076 kernel: audit: type=1300 audit(1765890350.038:723): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd5d73d860 a2=0 a3=7ffd5d73d84c items=0 ppid=4104 pid=5832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:50.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.051253 kernel: audit: type=1327 audit(1765890350.038:723): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:50.141134 containerd[2510]: time="2025-12-16T13:05:50.141010047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:50.145253 containerd[2510]: time="2025-12-16T13:05:50.145209262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:05:50.145369 containerd[2510]: time="2025-12-16T13:05:50.145320763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:50.145523 kubelet[3994]: E1216 13:05:50.145477 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:50.145576 kubelet[3994]: E1216 13:05:50.145536 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:05:50.145758 kubelet[3994]: E1216 13:05:50.145727 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bv8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-r577f_calico-apiserver(dd14feb0-ccbc-4867-9fa9-0c2099e4adc4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:50.147182 kubelet[3994]: E1216 13:05:50.147135 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:05:50.225991 systemd-networkd[2142]: vxlan.calico: Gained IPv6LL Dec 16 13:05:50.546351 systemd-networkd[2142]: cali49f0ebf95a4: Gained IPv6LL Dec 16 13:05:50.609977 systemd-networkd[2142]: calib4196a88cb6: Gained IPv6LL Dec 16 13:05:50.957462 kubelet[3994]: E1216 13:05:50.956969 3994 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:05:50.957462 kubelet[3994]: E1216 13:05:50.957058 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:05:50.970907 kubelet[3994]: I1216 13:05:50.969569 3994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r88g8" podStartSLOduration=40.969551078 podStartE2EDuration="40.969551078s" podCreationTimestamp="2025-12-16 13:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:50.01868184 +0000 UTC m=+44.307594146" watchObservedRunningTime="2025-12-16 13:05:50.969551078 +0000 UTC m=+45.258463359" Dec 16 13:05:50.994118 systemd-networkd[2142]: cali0202faf388b: Gained IPv6LL Dec 16 13:05:51.064000 audit[5834]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:51.064000 audit[5834]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 
a1=7ffec4af6900 a2=0 a3=7ffec4af68ec items=0 ppid=4104 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:51.072191 kernel: audit: type=1325 audit(1765890351.064:724): table=filter:136 family=2 entries=14 op=nft_register_rule pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:51.072237 kernel: audit: type=1300 audit(1765890351.064:724): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec4af6900 a2=0 a3=7ffec4af68ec items=0 ppid=4104 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:51.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:51.075586 kernel: audit: type=1327 audit(1765890351.064:724): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:51.096000 audit[5834]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:51.096000 audit[5834]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffec4af6900 a2=0 a3=7ffec4af68ec items=0 ppid=4104 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:51.103805 kernel: audit: type=1325 audit(1765890351.096:725): table=nat:137 family=2 entries=56 op=nft_register_chain pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:51.105712 kernel: audit: type=1300 
audit(1765890351.096:725): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffec4af6900 a2=0 a3=7ffec4af68ec items=0 ppid=4104 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:51.105746 kernel: audit: type=1327 audit(1765890351.096:725): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:51.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:51.314001 systemd-networkd[2142]: cali05af5dfc502: Gained IPv6LL Dec 16 13:05:51.825877 containerd[2510]: time="2025-12-16T13:05:51.824828558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76746db8cb-s8hqj,Uid:17414321-ac90-43ed-affc-521db178bc15,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:51.947595 systemd-networkd[2142]: cali7ed80a4e8c6: Link UP Dec 16 13:05:51.948729 systemd-networkd[2142]: cali7ed80a4e8c6: Gained carrier Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.890 [INFO][5837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0 calico-kube-controllers-76746db8cb- calico-system 17414321-ac90-43ed-affc-521db178bc15 854 0 2025-12-16 13:05:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76746db8cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e calico-kube-controllers-76746db8cb-s8hqj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali7ed80a4e8c6 [] [] }} ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.890 [INFO][5837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.911 [INFO][5848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" HandleID="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.912 [INFO][5848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" HandleID="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-968fde264e", "pod":"calico-kube-controllers-76746db8cb-s8hqj", "timestamp":"2025-12-16 13:05:51.91185711 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:51.962006 
containerd[2510]: 2025-12-16 13:05:51.912 [INFO][5848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.912 [INFO][5848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.912 [INFO][5848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.916 [INFO][5848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.919 [INFO][5848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.922 [INFO][5848] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.923 [INFO][5848] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.925 [INFO][5848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.925 [INFO][5848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.926 [INFO][5848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180 Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.932 [INFO][5848] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.32.0/26 handle="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.943 [INFO][5848] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.6/26] block=192.168.32.0/26 handle="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.943 [INFO][5848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.6/26] handle="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.943 [INFO][5848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:51.962006 containerd[2510]: 2025-12-16 13:05:51.943 [INFO][5848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.6/26] IPv6=[] ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" HandleID="k8s-pod-network.7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Workload="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" Dec 16 13:05:51.962755 containerd[2510]: 2025-12-16 13:05:51.945 [INFO][5837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0", GenerateName:"calico-kube-controllers-76746db8cb-", Namespace:"calico-system", SelfLink:"", UID:"17414321-ac90-43ed-affc-521db178bc15", 
ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76746db8cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"calico-kube-controllers-76746db8cb-s8hqj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ed80a4e8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:51.962755 containerd[2510]: 2025-12-16 13:05:51.945 [INFO][5837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.6/32] ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" Dec 16 13:05:51.962755 containerd[2510]: 2025-12-16 13:05:51.945 [INFO][5837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ed80a4e8c6 ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" 
Dec 16 13:05:51.962755 containerd[2510]: 2025-12-16 13:05:51.949 [INFO][5837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" Dec 16 13:05:51.962755 containerd[2510]: 2025-12-16 13:05:51.949 [INFO][5837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0", GenerateName:"calico-kube-controllers-76746db8cb-", Namespace:"calico-system", SelfLink:"", UID:"17414321-ac90-43ed-affc-521db178bc15", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76746db8cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180", 
Pod:"calico-kube-controllers-76746db8cb-s8hqj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ed80a4e8c6", MAC:"ea:6c:e9:31:76:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:51.962755 containerd[2510]: 2025-12-16 13:05:51.959 [INFO][5837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" Namespace="calico-system" Pod="calico-kube-controllers-76746db8cb-s8hqj" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-calico--kube--controllers--76746db8cb--s8hqj-eth0" Dec 16 13:05:51.977000 audit[5863]: NETFILTER_CFG table=filter:138 family=2 entries=54 op=nft_register_chain pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:51.977000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=25976 a0=3 a1=7ffc40e510d0 a2=0 a3=7ffc40e510bc items=0 ppid=5350 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:51.981930 kernel: audit: type=1325 audit(1765890351.977:726): table=filter:138 family=2 entries=54 op=nft_register_chain pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:51.977000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:52.025894 containerd[2510]: time="2025-12-16T13:05:52.025227879Z" level=info msg="connecting to shim 7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180" 
address="unix:///run/containerd/s/4a17b4eebe5eaa83cdba547e50f6c90430e79ba2401ecf05af7aa37b4f1a1606" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:52.049039 systemd[1]: Started cri-containerd-7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180.scope - libcontainer container 7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180. Dec 16 13:05:52.057000 audit: BPF prog-id=265 op=LOAD Dec 16 13:05:52.058000 audit: BPF prog-id=266 op=LOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5873 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.058000 audit: BPF prog-id=266 op=UNLOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5873 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.058000 audit: BPF prog-id=267 op=LOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5873 pid=5884 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.058000 audit: BPF prog-id=268 op=LOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5873 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.058000 audit: BPF prog-id=268 op=UNLOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5873 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.058000 audit: BPF prog-id=267 op=UNLOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=5873 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.058000 audit: BPF prog-id=269 op=LOAD Dec 16 13:05:52.058000 audit[5884]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5873 pid=5884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:52.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766353536643630313131626635363838303661656466633938393163 Dec 16 13:05:52.091881 containerd[2510]: time="2025-12-16T13:05:52.091597331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76746db8cb-s8hqj,Uid:17414321-ac90-43ed-affc-521db178bc15,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f556d60111bf568806aedfc9891cc657173fccbc4f59e693aef927dc70ff180\"" Dec 16 13:05:52.093938 containerd[2510]: time="2025-12-16T13:05:52.093610416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:05:52.372414 containerd[2510]: time="2025-12-16T13:05:52.372281616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:52.375602 containerd[2510]: time="2025-12-16T13:05:52.375550545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:05:52.375602 containerd[2510]: time="2025-12-16T13:05:52.375583419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:52.375825 kubelet[3994]: E1216 13:05:52.375791 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:05:52.376490 kubelet[3994]: E1216 13:05:52.375856 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:05:52.376490 kubelet[3994]: E1216 13:05:52.376033 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76746db8cb-s8hqj_calico-system(17414321-ac90-43ed-affc-521db178bc15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:52.377587 kubelet[3994]: E1216 13:05:52.377555 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:05:52.823489 containerd[2510]: time="2025-12-16T13:05:52.823387704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2slmb,Uid:aa48803c-33bc-4da1-94e0-bc256a6f415a,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:52.823489 containerd[2510]: 
time="2025-12-16T13:05:52.823402490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc2zv,Uid:da16849a-2afd-49e7-91d5-6aafd4f3fe06,Namespace:calico-system,Attempt:0,}" Dec 16 13:05:52.958816 systemd-networkd[2142]: calic3f60cd9b28: Link UP Dec 16 13:05:52.960574 systemd-networkd[2142]: calic3f60cd9b28: Gained carrier Dec 16 13:05:52.973932 kubelet[3994]: E1216 13:05:52.973893 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.882 [INFO][5915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0 csi-node-driver- calico-system da16849a-2afd-49e7-91d5-6aafd4f3fe06 739 0 2025-12-16 13:05:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e csi-node-driver-sc2zv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic3f60cd9b28 [] [] }} ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-" Dec 16 13:05:52.983862 
containerd[2510]: 2025-12-16 13:05:52.882 [INFO][5915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.913 [INFO][5937] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" HandleID="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Workload="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.913 [INFO][5937] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" HandleID="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Workload="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-968fde264e", "pod":"csi-node-driver-sc2zv", "timestamp":"2025-12-16 13:05:52.913344515 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.914 [INFO][5937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.914 [INFO][5937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.914 [INFO][5937] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.918 [INFO][5937] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.922 [INFO][5937] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.924 [INFO][5937] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.926 [INFO][5937] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.927 [INFO][5937] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.927 [INFO][5937] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.928 [INFO][5937] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.935 [INFO][5937] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.946 [INFO][5937] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.32.7/26] block=192.168.32.0/26 handle="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.946 [INFO][5937] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.7/26] handle="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.946 [INFO][5937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:52.983862 containerd[2510]: 2025-12-16 13:05:52.946 [INFO][5937] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.7/26] IPv6=[] ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" HandleID="k8s-pod-network.e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Workload="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:52.984592 containerd[2510]: 2025-12-16 13:05:52.948 [INFO][5915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da16849a-2afd-49e7-91d5-6aafd4f3fe06", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"csi-node-driver-sc2zv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3f60cd9b28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:52.984592 containerd[2510]: 2025-12-16 13:05:52.948 [INFO][5915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.7/32] ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:52.984592 containerd[2510]: 2025-12-16 13:05:52.948 [INFO][5915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3f60cd9b28 ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:52.984592 containerd[2510]: 2025-12-16 13:05:52.961 [INFO][5915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:52.984592 containerd[2510]: 2025-12-16 13:05:52.962 
[INFO][5915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da16849a-2afd-49e7-91d5-6aafd4f3fe06", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc", Pod:"csi-node-driver-sc2zv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3f60cd9b28", MAC:"d2:4c:f8:7e:b9:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:52.984592 containerd[2510]: 2025-12-16 13:05:52.980 [INFO][5915] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" Namespace="calico-system" Pod="csi-node-driver-sc2zv" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-csi--node--driver--sc2zv-eth0" Dec 16 13:05:53.008000 audit[5960]: NETFILTER_CFG table=filter:139 family=2 entries=48 op=nft_register_chain pid=5960 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:53.008000 audit[5960]: SYSCALL arch=c000003e syscall=46 success=yes exit=23108 a0=3 a1=7ffc52a7ba40 a2=0 a3=7ffc52a7ba2c items=0 ppid=5350 pid=5960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:53.045419 containerd[2510]: time="2025-12-16T13:05:53.045065326Z" level=info msg="connecting to shim e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc" address="unix:///run/containerd/s/53d42389fe1395b1aaca8446ea23cfcc63a3075367a2733d187bb3660f57eb0b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:53.081097 systemd[1]: Started cri-containerd-e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc.scope - libcontainer container e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc. 
Dec 16 13:05:53.097994 systemd-networkd[2142]: cali0a5b2e88028: Link UP Dec 16 13:05:53.098894 systemd-networkd[2142]: cali0a5b2e88028: Gained carrier Dec 16 13:05:53.113000 audit: BPF prog-id=270 op=LOAD Dec 16 13:05:53.113000 audit: BPF prog-id=271 op=LOAD Dec 16 13:05:53.113000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.114000 audit: BPF prog-id=271 op=UNLOAD Dec 16 13:05:53.114000 audit[5980]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.114000 audit: BPF prog-id=272 op=LOAD Dec 16 13:05:53.114000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.114000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.114000 audit: BPF prog-id=273 op=LOAD Dec 16 13:05:53.114000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.114000 audit: BPF prog-id=273 op=UNLOAD Dec 16 13:05:53.114000 audit[5980]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.114000 audit: BPF prog-id=272 op=UNLOAD Dec 16 13:05:53.114000 audit[5980]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:53.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.114000 audit: BPF prog-id=274 op=LOAD Dec 16 13:05:53.114000 audit[5980]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=5969 pid=5980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539356537323635633061336636616464323562646438616238386537 Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.883 [INFO][5911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0 goldmane-666569f655- calico-system aa48803c-33bc-4da1-94e0-bc256a6f415a 855 0 2025-12-16 13:05:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-968fde264e goldmane-666569f655-2slmb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0a5b2e88028 [] [] }} ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.883 
[INFO][5911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.914 [INFO][5938] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" HandleID="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Workload="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.915 [INFO][5938] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" HandleID="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Workload="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-968fde264e", "pod":"goldmane-666569f655-2slmb", "timestamp":"2025-12-16 13:05:52.914935083 +0000 UTC"}, Hostname:"ci-4515.1.0-a-968fde264e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.915 [INFO][5938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.946 [INFO][5938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:52.946 [INFO][5938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-968fde264e' Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.019 [INFO][5938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.023 [INFO][5938] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.027 [INFO][5938] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.034 [INFO][5938] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.039 [INFO][5938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.039 [INFO][5938] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.041 [INFO][5938] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9 Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.049 [INFO][5938] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.067 [INFO][5938] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.32.8/26] block=192.168.32.0/26 handle="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.068 [INFO][5938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.8/26] handle="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" host="ci-4515.1.0-a-968fde264e" Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.070 [INFO][5938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:05:53.128062 containerd[2510]: 2025-12-16 13:05:53.070 [INFO][5938] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.8/26] IPv6=[] ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" HandleID="k8s-pod-network.7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Workload="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.129431 containerd[2510]: 2025-12-16 13:05:53.076 [INFO][5911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa48803c-33bc-4da1-94e0-bc256a6f415a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"", Pod:"goldmane-666569f655-2slmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a5b2e88028", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:53.129431 containerd[2510]: 2025-12-16 13:05:53.077 [INFO][5911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.8/32] ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.129431 containerd[2510]: 2025-12-16 13:05:53.077 [INFO][5911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a5b2e88028 ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.129431 containerd[2510]: 2025-12-16 13:05:53.100 [INFO][5911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.129431 containerd[2510]: 2025-12-16 13:05:53.101 [INFO][5911] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"aa48803c-33bc-4da1-94e0-bc256a6f415a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-968fde264e", ContainerID:"7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9", Pod:"goldmane-666569f655-2slmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a5b2e88028", MAC:"16:8a:18:ed:d1:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:05:53.129431 containerd[2510]: 2025-12-16 13:05:53.122 [INFO][5911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" Namespace="calico-system" Pod="goldmane-666569f655-2slmb" WorkloadEndpoint="ci--4515.1.0--a--968fde264e-k8s-goldmane--666569f655--2slmb-eth0" Dec 16 13:05:53.146000 audit[6014]: NETFILTER_CFG table=filter:140 family=2 entries=48 op=nft_register_chain pid=6014 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 13:05:53.146000 audit[6014]: SYSCALL arch=c000003e syscall=46 success=yes exit=26388 a0=3 a1=7ffcc5a83030 a2=0 a3=7ffcc5a8301c items=0 ppid=5350 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.146000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 13:05:53.148951 containerd[2510]: time="2025-12-16T13:05:53.148920986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sc2zv,Uid:da16849a-2afd-49e7-91d5-6aafd4f3fe06,Namespace:calico-system,Attempt:0,} returns sandbox id \"e95e7265c0a3f6add25bdd8ab88e70dfa6f822898e438fd59cd4fe833db64dfc\"" Dec 16 13:05:53.150009 containerd[2510]: time="2025-12-16T13:05:53.149986725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:05:53.185858 containerd[2510]: time="2025-12-16T13:05:53.185813100Z" level=info msg="connecting to shim 7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9" address="unix:///run/containerd/s/1c219911a1c3c181f37063ed9847b2d4936ee48be8a0a941aac1aaec07f122ca" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:53.216023 systemd[1]: Started cri-containerd-7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9.scope - libcontainer container 7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9. 
Dec 16 13:05:53.223000 audit: BPF prog-id=275 op=LOAD Dec 16 13:05:53.223000 audit: BPF prog-id=276 op=LOAD Dec 16 13:05:53.223000 audit[6035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.224000 audit: BPF prog-id=276 op=UNLOAD Dec 16 13:05:53.224000 audit[6035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.224000 audit: BPF prog-id=277 op=LOAD Dec 16 13:05:53.224000 audit[6035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.224000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.224000 audit: BPF prog-id=278 op=LOAD Dec 16 13:05:53.224000 audit[6035]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.224000 audit: BPF prog-id=278 op=UNLOAD Dec 16 13:05:53.224000 audit[6035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.224000 audit: BPF prog-id=277 op=UNLOAD Dec 16 13:05:53.224000 audit[6035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
13:05:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.224000 audit: BPF prog-id=279 op=LOAD Dec 16 13:05:53.224000 audit[6035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6024 pid=6035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737313564663765313539373731666166306334353031633364303131 Dec 16 13:05:53.257293 containerd[2510]: time="2025-12-16T13:05:53.257271680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2slmb,Uid:aa48803c-33bc-4da1-94e0-bc256a6f415a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7715df7e159771faf0c4501c3d011d422f90a050c52fa7ef9eea6eb925376ef9\"" Dec 16 13:05:53.440987 containerd[2510]: time="2025-12-16T13:05:53.440798479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:53.444598 containerd[2510]: time="2025-12-16T13:05:53.444547025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:05:53.444703 containerd[2510]: time="2025-12-16T13:05:53.444642286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 
13:05:53.444916 kubelet[3994]: E1216 13:05:53.444871 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:05:53.446734 kubelet[3994]: E1216 13:05:53.444934 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:05:53.446734 kubelet[3994]: E1216 13:05:53.445179 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMess
agePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:53.447180 containerd[2510]: time="2025-12-16T13:05:53.446926215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:05:53.618001 systemd-networkd[2142]: cali7ed80a4e8c6: Gained IPv6LL Dec 16 13:05:53.721209 containerd[2510]: time="2025-12-16T13:05:53.721089437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:53.724052 containerd[2510]: time="2025-12-16T13:05:53.724010944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:05:53.724149 containerd[2510]: time="2025-12-16T13:05:53.724100851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:53.724373 kubelet[3994]: E1216 13:05:53.724331 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:05:53.724455 kubelet[3994]: E1216 13:05:53.724444 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:05:53.724906 kubelet[3994]: E1216 13:05:53.724749 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb74h,ReadOnly:true,MountPath:/
var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2slmb_calico-system(aa48803c-33bc-4da1-94e0-bc256a6f415a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:53.725446 containerd[2510]: time="2025-12-16T13:05:53.725297812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:05:53.726632 kubelet[3994]: E1216 13:05:53.726563 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:05:53.978666 kubelet[3994]: E1216 13:05:53.977322 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:05:53.980738 kubelet[3994]: E1216 13:05:53.980706 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:05:54.006017 containerd[2510]: time="2025-12-16T13:05:54.005980665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:54.011014 containerd[2510]: time="2025-12-16T13:05:54.010940371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:05:54.011111 
containerd[2510]: time="2025-12-16T13:05:54.010980218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:54.011223 kubelet[3994]: E1216 13:05:54.011189 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:05:54.011223 kubelet[3994]: E1216 13:05:54.011224 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:05:54.011399 kubelet[3994]: E1216 13:05:54.011352 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:54.013029 kubelet[3994]: E1216 13:05:54.012983 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:54.027000 audit[6067]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=6067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:54.027000 audit[6067]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3deee330 a2=0 a3=7ffc3deee31c items=0 ppid=4104 pid=6067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:54.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:54.030000 audit[6067]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=6067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:05:54.030000 audit[6067]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc3deee330 a2=0 a3=7ffc3deee31c items=0 ppid=4104 pid=6067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:05:54.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:05:54.962550 systemd-networkd[2142]: calic3f60cd9b28: Gained IPv6LL Dec 16 13:05:54.962859 systemd-networkd[2142]: cali0a5b2e88028: Gained IPv6LL Dec 16 13:05:54.983145 kubelet[3994]: E1216 13:05:54.982971 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:05:54.984766 kubelet[3994]: E1216 13:05:54.984451 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:05:58.823552 containerd[2510]: 
time="2025-12-16T13:05:58.823417723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:05:59.098962 containerd[2510]: time="2025-12-16T13:05:59.098818304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:59.102257 containerd[2510]: time="2025-12-16T13:05:59.102225104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:05:59.102331 containerd[2510]: time="2025-12-16T13:05:59.102307078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:59.102498 kubelet[3994]: E1216 13:05:59.102456 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:05:59.102886 kubelet[3994]: E1216 13:05:59.102512 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:05:59.102886 kubelet[3994]: E1216 13:05:59.102647 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b02e6467f2cc48f990d97e006e40e793,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:59.105349 containerd[2510]: time="2025-12-16T13:05:59.105247594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:05:59.389362 containerd[2510]: 
time="2025-12-16T13:05:59.389234670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:05:59.394192 containerd[2510]: time="2025-12-16T13:05:59.394144665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:05:59.394305 containerd[2510]: time="2025-12-16T13:05:59.394250553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:05:59.394428 kubelet[3994]: E1216 13:05:59.394380 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:05:59.394505 kubelet[3994]: E1216 13:05:59.394445 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:05:59.394609 kubelet[3994]: E1216 13:05:59.394580 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:05:59.396230 kubelet[3994]: E1216 13:05:59.396151 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:06:02.824199 containerd[2510]: time="2025-12-16T13:06:02.824139522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:03.101263 containerd[2510]: time="2025-12-16T13:06:03.101132389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:03.104269 containerd[2510]: time="2025-12-16T13:06:03.104216997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:03.104269 containerd[2510]: time="2025-12-16T13:06:03.104248340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:03.104554 kubelet[3994]: E1216 13:06:03.104522 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:03.104938 kubelet[3994]: E1216 13:06:03.104568 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:03.104938 kubelet[3994]: E1216 13:06:03.104720 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rxsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-6zsvz_calico-apiserver(e585732d-c5ba-41d4-91da-20d86215882e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:03.105973 kubelet[3994]: E1216 13:06:03.105921 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:06:04.824389 containerd[2510]: time="2025-12-16T13:06:04.823900125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:05.102180 containerd[2510]: time="2025-12-16T13:06:05.102052861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
13:06:05.105708 containerd[2510]: time="2025-12-16T13:06:05.105600121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:05.105708 containerd[2510]: time="2025-12-16T13:06:05.105626282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:05.105975 kubelet[3994]: E1216 13:06:05.105942 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:05.106294 kubelet[3994]: E1216 13:06:05.105990 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:05.106294 kubelet[3994]: E1216 13:06:05.106154 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bv8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-r577f_calico-apiserver(dd14feb0-ccbc-4867-9fa9-0c2099e4adc4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:05.107506 kubelet[3994]: E1216 13:06:05.107443 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:06:05.825102 containerd[2510]: time="2025-12-16T13:06:05.825034524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:06.103362 containerd[2510]: time="2025-12-16T13:06:06.103200069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:06.106564 containerd[2510]: time="2025-12-16T13:06:06.106507915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:06.106689 containerd[2510]: time="2025-12-16T13:06:06.106609365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:06.106845 kubelet[3994]: E1216 13:06:06.106796 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:06.107133 kubelet[3994]: E1216 13:06:06.106879 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:06.107133 kubelet[3994]: E1216 13:06:06.107022 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:06.110158 containerd[2510]: time="2025-12-16T13:06:06.110124651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:06.391939 containerd[2510]: time="2025-12-16T13:06:06.391726938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:06.395514 containerd[2510]: time="2025-12-16T13:06:06.395457133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:06.395514 containerd[2510]: time="2025-12-16T13:06:06.395459650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:06.395770 kubelet[3994]: E1216 13:06:06.395726 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:06.395834 kubelet[3994]: E1216 13:06:06.395793 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:06.396157 kubelet[3994]: E1216 13:06:06.395964 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:06.397505 kubelet[3994]: E1216 13:06:06.397459 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:06:08.823871 containerd[2510]: time="2025-12-16T13:06:08.823740502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:09.093680 containerd[2510]: time="2025-12-16T13:06:09.093549267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:09.096904 containerd[2510]: time="2025-12-16T13:06:09.096861779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:09.097013 containerd[2510]: time="2025-12-16T13:06:09.096874471Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:09.097148 kubelet[3994]: E1216 13:06:09.097107 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:09.098012 kubelet[3994]: E1216 13:06:09.097162 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:09.098012 kubelet[3994]: E1216 13:06:09.097553 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.
pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76746db8cb-s8hqj_calico-system(17414321-ac90-43ed-affc-521db178bc15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:09.098167 containerd[2510]: time="2025-12-16T13:06:09.097463803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:09.098835 kubelet[3994]: E1216 
13:06:09.098799 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:06:09.390776 containerd[2510]: time="2025-12-16T13:06:09.390640088Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:09.396354 containerd[2510]: time="2025-12-16T13:06:09.396230313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:09.396354 containerd[2510]: time="2025-12-16T13:06:09.396257218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:09.396660 kubelet[3994]: E1216 13:06:09.396622 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:09.396715 kubelet[3994]: E1216 13:06:09.396676 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:09.396915 kubelet[3994]: E1216 13:06:09.396828 3994 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb74h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2slmb_calico-system(aa48803c-33bc-4da1-94e0-bc256a6f415a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:09.398209 kubelet[3994]: E1216 13:06:09.398170 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:06:11.825432 kubelet[3994]: E1216 13:06:11.825267 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:06:15.824641 kubelet[3994]: E1216 13:06:15.824451 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:06:15.824641 kubelet[3994]: E1216 13:06:15.824541 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:06:18.825297 kubelet[3994]: E1216 13:06:18.825235 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:06:20.824748 kubelet[3994]: E1216 13:06:20.824697 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:06:22.823989 containerd[2510]: time="2025-12-16T13:06:22.823919193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:06:23.120978 containerd[2510]: time="2025-12-16T13:06:23.120853555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:23.125819 containerd[2510]: time="2025-12-16T13:06:23.125782026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" Dec 16 13:06:23.125928 containerd[2510]: time="2025-12-16T13:06:23.125885832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:23.126020 kubelet[3994]: E1216 13:06:23.125992 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:23.126359 kubelet[3994]: E1216 13:06:23.126034 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:06:23.126359 kubelet[3994]: E1216 13:06:23.126162 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b02e6467f2cc48f990d97e006e40e793,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:23.129021 containerd[2510]: time="2025-12-16T13:06:23.128988494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:06:23.391378 containerd[2510]: 
time="2025-12-16T13:06:23.391253539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:23.394643 containerd[2510]: time="2025-12-16T13:06:23.394528285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:06:23.394643 containerd[2510]: time="2025-12-16T13:06:23.394555270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:23.394855 kubelet[3994]: E1216 13:06:23.394788 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:23.394914 kubelet[3994]: E1216 13:06:23.394852 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:06:23.395048 kubelet[3994]: E1216 13:06:23.395004 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:23.396594 kubelet[3994]: E1216 13:06:23.396524 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:06:23.826868 kubelet[3994]: E1216 13:06:23.826423 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:06:26.824266 containerd[2510]: time="2025-12-16T13:06:26.823465323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:27.127956 containerd[2510]: time="2025-12-16T13:06:27.127809533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:27.135893 containerd[2510]: time="2025-12-16T13:06:27.135145077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:27.136163 containerd[2510]: time="2025-12-16T13:06:27.135876652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:27.136323 kubelet[3994]: E1216 13:06:27.136294 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:27.137113 kubelet[3994]: E1216 13:06:27.136649 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:27.137297 kubelet[3994]: E1216 13:06:27.137257 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bv8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-r577f_calico-apiserver(dd14feb0-ccbc-4867-9fa9-0c2099e4adc4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:27.138577 kubelet[3994]: E1216 13:06:27.138543 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:06:30.823503 containerd[2510]: time="2025-12-16T13:06:30.823266885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:06:31.096797 containerd[2510]: time="2025-12-16T13:06:31.096667563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:31.100142 containerd[2510]: time="2025-12-16T13:06:31.100021171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:06:31.100142 containerd[2510]: time="2025-12-16T13:06:31.100065636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:31.100501 kubelet[3994]: E1216 13:06:31.100469 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:31.101164 kubelet[3994]: E1216 13:06:31.100926 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:06:31.101258 kubelet[3994]: E1216 13:06:31.101183 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rxsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-6zsvz_calico-apiserver(e585732d-c5ba-41d4-91da-20d86215882e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:31.101607 containerd[2510]: time="2025-12-16T13:06:31.101585235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:06:31.103019 kubelet[3994]: E1216 13:06:31.102952 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:06:31.365459 containerd[2510]: time="2025-12-16T13:06:31.365327082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
13:06:31.368443 containerd[2510]: time="2025-12-16T13:06:31.368409203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:06:31.368560 containerd[2510]: time="2025-12-16T13:06:31.368493310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:31.368655 kubelet[3994]: E1216 13:06:31.368624 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:31.368706 kubelet[3994]: E1216 13:06:31.368671 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:06:31.368847 kubelet[3994]: E1216 13:06:31.368806 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:06:31.371447 containerd[2510]: time="2025-12-16T13:06:31.371408874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:06:31.652073 containerd[2510]: time="2025-12-16T13:06:31.651947572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:31.657429 containerd[2510]: time="2025-12-16T13:06:31.657366513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:06:31.657429 containerd[2510]: time="2025-12-16T13:06:31.657402712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:31.657624 kubelet[3994]: E1216 13:06:31.657584 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:31.657681 kubelet[3994]: E1216 13:06:31.657638 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:06:31.658175 kubelet[3994]: E1216 13:06:31.657828 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:31.659614 kubelet[3994]: E1216 13:06:31.659489 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:06:31.824914 containerd[2510]: time="2025-12-16T13:06:31.824509907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:06:32.105584 containerd[2510]: time="2025-12-16T13:06:32.105533114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:32.109890 containerd[2510]: time="2025-12-16T13:06:32.109796809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:06:32.110121 kubelet[3994]: E1216 13:06:32.110086 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:32.110434 kubelet[3994]: E1216 13:06:32.110135 3994 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:06:32.110434 kubelet[3994]: E1216 13:06:32.110281 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76746db8cb-s8hqj_calico-system(17414321-ac90-43ed-affc-521db178bc15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:32.110697 containerd[2510]: time="2025-12-16T13:06:32.109857823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:32.111821 kubelet[3994]: E1216 13:06:32.111772 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:06:36.824933 kubelet[3994]: E1216 13:06:36.824770 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:06:37.825240 containerd[2510]: time="2025-12-16T13:06:37.824822639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:06:38.109045 containerd[2510]: time="2025-12-16T13:06:38.108923266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:06:38.113406 containerd[2510]: time="2025-12-16T13:06:38.113346928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:06:38.113515 containerd[2510]: time="2025-12-16T13:06:38.113357266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:06:38.113626 kubelet[3994]: E1216 
13:06:38.113592 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:38.113962 kubelet[3994]: E1216 13:06:38.113633 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:06:38.113962 kubelet[3994]: E1216 13:06:38.113788 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb74h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2slmb_calico-system(aa48803c-33bc-4da1-94e0-bc256a6f415a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:06:38.115075 kubelet[3994]: E1216 13:06:38.115011 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:06:38.823746 kubelet[3994]: E1216 13:06:38.823672 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:06:41.826930 kubelet[3994]: E1216 13:06:41.826177 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:06:44.827732 kubelet[3994]: E1216 13:06:44.827681 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:06:46.824988 kubelet[3994]: E1216 13:06:46.824942 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:06:48.823873 kubelet[3994]: E1216 13:06:48.823758 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:06:49.830288 kubelet[3994]: E1216 13:06:49.829941 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:06:51.097977 systemd[1]: Started sshd@7-10.200.4.31:22-10.200.16.10:49516.service - OpenSSH per-connection server daemon (10.200.16.10:49516). Dec 16 13:06:51.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.31:22-10.200.16.10:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:51.100527 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 16 13:06:51.102466 kernel: audit: type=1130 audit(1765890411.097:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.31:22-10.200.16.10:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:51.635000 audit[6164]: USER_ACCT pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.642866 kernel: audit: type=1101 audit(1765890411.635:756): pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.642959 sshd[6164]: Accepted publickey for core from 10.200.16.10 port 49516 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:06:51.643530 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:51.642000 audit[6164]: CRED_ACQ pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.649868 kernel: audit: type=1103 audit(1765890411.642:757): pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.642000 audit[6164]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5304aa20 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:51.657833 kernel: audit: type=1006 audit(1765890411.642:758): pid=6164 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 13:06:51.657902 kernel: audit: type=1300 audit(1765890411.642:758): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5304aa20 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:51.663323 systemd-logind[2482]: New session 10 of user core. Dec 16 13:06:51.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:06:51.665866 kernel: audit: type=1327 audit(1765890411.642:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:06:51.668201 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:06:51.670000 audit[6164]: USER_START pid=6164 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.676000 audit[6167]: CRED_ACQ pid=6167 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.680597 kernel: audit: type=1105 audit(1765890411.670:759): pid=6164 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.680662 kernel: audit: type=1103 audit(1765890411.676:760): pid=6167 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:51.826096 kubelet[3994]: E1216 13:06:51.825672 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:06:52.057412 sshd[6167]: Connection closed by 10.200.16.10 port 49516 Dec 16 13:06:52.058112 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:52.059000 audit[6164]: USER_END pid=6164 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:52.064000 audit[6164]: CRED_DISP pid=6164 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:52.070327 kernel: audit: type=1106 audit(1765890412.059:761): pid=6164 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 
13:06:52.070391 kernel: audit: type=1104 audit(1765890412.064:762): pid=6164 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:52.072620 systemd[1]: sshd@7-10.200.4.31:22-10.200.16.10:49516.service: Deactivated successfully. Dec 16 13:06:52.074276 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:06:52.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.31:22-10.200.16.10:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:52.076703 systemd-logind[2482]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:06:52.077805 systemd-logind[2482]: Removed session 10. Dec 16 13:06:55.827008 kubelet[3994]: E1216 13:06:55.826828 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:06:56.825129 kubelet[3994]: E1216 13:06:56.824799 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" 
for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:06:57.166512 systemd[1]: Started sshd@8-10.200.4.31:22-10.200.16.10:49524.service - OpenSSH per-connection server daemon (10.200.16.10:49524). Dec 16 13:06:57.173795 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:06:57.173901 kernel: audit: type=1130 audit(1765890417.166:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.31:22-10.200.16.10:49524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:57.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.31:22-10.200.16.10:49524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:06:57.691000 audit[6180]: USER_ACCT pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.698934 kernel: audit: type=1101 audit(1765890417.691:765): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.699021 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 49524 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:06:57.699472 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:06:57.698000 audit[6180]: CRED_ACQ pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.704873 kernel: audit: type=1103 audit(1765890417.698:766): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.708872 kernel: audit: type=1006 audit(1765890417.698:767): pid=6180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 13:06:57.698000 audit[6180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec9f629c0 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:57.716197 systemd-logind[2482]: New session 11 of user core. Dec 16 13:06:57.717239 kernel: audit: type=1300 audit(1765890417.698:767): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec9f629c0 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:06:57.698000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:06:57.720302 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:06:57.720877 kernel: audit: type=1327 audit(1765890417.698:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:06:57.723000 audit[6180]: USER_START pid=6180 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.733885 kernel: audit: type=1105 audit(1765890417.723:768): pid=6180 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.734000 audit[6183]: CRED_ACQ pid=6183 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:57.743863 kernel: audit: type=1103 audit(1765890417.734:769): pid=6183 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:58.067969 sshd[6183]: Connection closed by 10.200.16.10 port 49524 Dec 16 13:06:58.070032 sshd-session[6180]: pam_unix(sshd:session): session closed for user core Dec 16 13:06:58.070000 audit[6180]: USER_END pid=6180 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:58.075019 systemd[1]: sshd@8-10.200.4.31:22-10.200.16.10:49524.service: Deactivated successfully. Dec 16 13:06:58.079412 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:06:58.082007 kernel: audit: type=1106 audit(1765890418.070:770): pid=6180 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:58.083893 systemd-logind[2482]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:06:58.085206 systemd-logind[2482]: Removed session 11. 
Dec 16 13:06:58.070000 audit[6180]: CRED_DISP pid=6180 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:58.094880 kernel: audit: type=1104 audit(1765890418.070:771): pid=6180 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:06:58.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.31:22-10.200.16.10:49524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:06:59.826568 kubelet[3994]: E1216 13:06:59.825867 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:07:00.825239 kubelet[3994]: E1216 13:07:00.825196 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" 
podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:07:03.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.31:22-10.200.16.10:41496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:03.179060 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:03.179097 kernel: audit: type=1130 audit(1765890423.176:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.31:22-10.200.16.10:41496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:03.177397 systemd[1]: Started sshd@9-10.200.4.31:22-10.200.16.10:41496.service - OpenSSH per-connection server daemon (10.200.16.10:41496). Dec 16 13:07:03.692000 audit[6196]: USER_ACCT pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:03.703463 sshd[6196]: Accepted publickey for core from 10.200.16.10 port 41496 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:03.703867 kernel: audit: type=1101 audit(1765890423.692:774): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:03.704077 sshd-session[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:03.702000 audit[6196]: CRED_ACQ pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:03.720758 systemd-logind[2482]: New session 12 of user core. Dec 16 13:07:03.721354 kernel: audit: type=1103 audit(1765890423.702:775): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:03.727311 kernel: audit: type=1006 audit(1765890423.702:776): pid=6196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 13:07:03.726868 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:07:03.702000 audit[6196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeec4312e0 a2=3 a3=0 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:03.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:03.743756 kernel: audit: type=1300 audit(1765890423.702:776): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeec4312e0 a2=3 a3=0 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:03.743822 kernel: audit: type=1327 audit(1765890423.702:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:03.740000 audit[6196]: USER_START pid=6196 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Dec 16 13:07:03.755876 kernel: audit: type=1105 audit(1765890423.740:777): pid=6196 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:03.755000 audit[6199]: CRED_ACQ pid=6199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:03.764882 kernel: audit: type=1103 audit(1765890423.755:778): pid=6199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:04.070630 sshd[6199]: Connection closed by 10.200.16.10 port 41496 Dec 16 13:07:04.071150 sshd-session[6196]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:04.072000 audit[6196]: USER_END pid=6196 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:04.076229 systemd[1]: sshd@9-10.200.4.31:22-10.200.16.10:41496.service: Deactivated successfully. Dec 16 13:07:04.079069 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 16 13:07:04.081875 kernel: audit: type=1106 audit(1765890424.072:779): pid=6196 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:04.072000 audit[6196]: CRED_DISP pid=6196 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:04.087931 systemd-logind[2482]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:07:04.088882 systemd-logind[2482]: Removed session 12. Dec 16 13:07:04.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.31:22-10.200.16.10:41496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:04.091876 kernel: audit: type=1104 audit(1765890424.072:780): pid=6196 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:04.824651 containerd[2510]: time="2025-12-16T13:07:04.824591112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:07:05.097142 containerd[2510]: time="2025-12-16T13:07:05.097013592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:05.102176 containerd[2510]: time="2025-12-16T13:07:05.102111798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:07:05.102258 containerd[2510]: time="2025-12-16T13:07:05.102221847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:05.102441 kubelet[3994]: E1216 13:07:05.102353 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:05.102441 kubelet[3994]: E1216 13:07:05.102408 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:07:05.103001 kubelet[3994]: E1216 13:07:05.102896 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b02e6467f2cc48f990d97e006e40e793,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:05.105183 containerd[2510]: time="2025-12-16T13:07:05.104974005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:07:05.398078 containerd[2510]: 
time="2025-12-16T13:07:05.397950077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:05.403475 containerd[2510]: time="2025-12-16T13:07:05.403361166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:07:05.403475 containerd[2510]: time="2025-12-16T13:07:05.403398445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:05.404507 kubelet[3994]: E1216 13:07:05.403644 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:05.404594 kubelet[3994]: E1216 13:07:05.404526 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:07:05.404722 kubelet[3994]: E1216 13:07:05.404679 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxw2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-556dc6d57-6kbq4_calico-system(78647b74-1322-40dc-8769-efb3043691d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:05.406268 kubelet[3994]: E1216 13:07:05.406219 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:07:05.829718 kubelet[3994]: E1216 13:07:05.828257 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:07:07.826159 kubelet[3994]: E1216 13:07:07.825952 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:07:09.178124 systemd[1]: Started sshd@10-10.200.4.31:22-10.200.16.10:41508.service - OpenSSH per-connection server daemon (10.200.16.10:41508). Dec 16 13:07:09.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.31:22-10.200.16.10:41508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:09.180697 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:09.180783 kernel: audit: type=1130 audit(1765890429.176:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.31:22-10.200.16.10:41508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:09.709000 audit[6220]: USER_ACCT pid=6220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.711998 sshd[6220]: Accepted publickey for core from 10.200.16.10 port 41508 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:09.713765 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:09.719099 systemd-logind[2482]: New session 13 of user core. 
Dec 16 13:07:09.711000 audit[6220]: CRED_ACQ pid=6220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.725288 kernel: audit: type=1101 audit(1765890429.709:783): pid=6220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.725445 kernel: audit: type=1103 audit(1765890429.711:784): pid=6220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.729512 kernel: audit: type=1006 audit(1765890429.711:785): pid=6220 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 13:07:09.730062 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:07:09.711000 audit[6220]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea6f3c810 a2=3 a3=0 items=0 ppid=1 pid=6220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.738871 kernel: audit: type=1300 audit(1765890429.711:785): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea6f3c810 a2=3 a3=0 items=0 ppid=1 pid=6220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:09.739123 kernel: audit: type=1327 audit(1765890429.711:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:09.711000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:09.737000 audit[6220]: USER_START pid=6220 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.741000 audit[6224]: CRED_ACQ pid=6224 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.753872 kernel: audit: type=1105 audit(1765890429.737:786): pid=6220 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:09.753965 kernel: audit: type=1103 
audit(1765890429.741:787): pid=6224 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:10.056982 sshd[6224]: Connection closed by 10.200.16.10 port 41508 Dec 16 13:07:10.058040 sshd-session[6220]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:10.057000 audit[6220]: USER_END pid=6220 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:10.062455 systemd[1]: sshd@10-10.200.4.31:22-10.200.16.10:41508.service: Deactivated successfully. Dec 16 13:07:10.064544 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:07:10.057000 audit[6220]: CRED_DISP pid=6220 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:10.068185 systemd-logind[2482]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:07:10.069335 systemd-logind[2482]: Removed session 13. 
Dec 16 13:07:10.072751 kernel: audit: type=1106 audit(1765890430.057:788): pid=6220 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:10.072820 kernel: audit: type=1104 audit(1765890430.057:789): pid=6220 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:10.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.31:22-10.200.16.10:41508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:10.823159 kubelet[3994]: E1216 13:07:10.823119 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:07:10.823159 kubelet[3994]: E1216 13:07:10.823119 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:07:14.823130 kubelet[3994]: E1216 13:07:14.823039 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:07:15.172172 systemd[1]: Started sshd@11-10.200.4.31:22-10.200.16.10:51360.service - OpenSSH per-connection server daemon (10.200.16.10:51360). Dec 16 13:07:15.182160 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:15.182193 kernel: audit: type=1130 audit(1765890435.171:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.31:22-10.200.16.10:51360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:15.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.31:22-10.200.16.10:51360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:15.696000 audit[6240]: USER_ACCT pid=6240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.700967 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 51360 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:15.701905 kernel: audit: type=1101 audit(1765890435.696:792): pid=6240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.701000 audit[6240]: CRED_ACQ pid=6240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.703037 sshd-session[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:15.711956 kernel: audit: type=1103 audit(1765890435.701:793): pid=6240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.712023 kernel: audit: type=1006 audit(1765890435.701:794): pid=6240 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 13:07:15.712794 systemd-logind[2482]: New session 14 of user core. 
Dec 16 13:07:15.701000 audit[6240]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1838a940 a2=3 a3=0 items=0 ppid=1 pid=6240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:15.718434 kernel: audit: type=1300 audit(1765890435.701:794): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1838a940 a2=3 a3=0 items=0 ppid=1 pid=6240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:15.701000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:15.721607 kernel: audit: type=1327 audit(1765890435.701:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:15.724037 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 13:07:15.725000 audit[6240]: USER_START pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.727000 audit[6243]: CRED_ACQ pid=6243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.738731 kernel: audit: type=1105 audit(1765890435.725:795): pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:15.738801 kernel: audit: type=1103 audit(1765890435.727:796): pid=6243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.049831 sshd[6243]: Connection closed by 10.200.16.10 port 51360 Dec 16 13:07:16.050660 sshd-session[6240]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:16.051000 audit[6240]: USER_END pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.057308 systemd[1]: sshd@11-10.200.4.31:22-10.200.16.10:51360.service: Deactivated successfully. 
Dec 16 13:07:16.063508 kernel: audit: type=1106 audit(1765890436.051:797): pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.063564 kernel: audit: type=1104 audit(1765890436.051:798): pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.051000 audit[6240]: CRED_DISP pid=6240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:16.062119 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:07:16.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.31:22-10.200.16.10:51360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:16.065421 systemd-logind[2482]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:07:16.066185 systemd-logind[2482]: Removed session 14. 
Dec 16 13:07:18.824269 kubelet[3994]: E1216 13:07:18.824183 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:07:19.825880 containerd[2510]: time="2025-12-16T13:07:19.825760625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:20.097889 containerd[2510]: time="2025-12-16T13:07:20.097753524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:20.102925 containerd[2510]: time="2025-12-16T13:07:20.102834667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:20.103041 containerd[2510]: time="2025-12-16T13:07:20.102874609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:20.103224 kubelet[3994]: E1216 13:07:20.103192 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:20.103519 kubelet[3994]: E1216 13:07:20.103249 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:20.103595 kubelet[3994]: E1216 13:07:20.103548 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bv8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-r577f_calico-apiserver(dd14feb0-ccbc-4867-9fa9-0c2099e4adc4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:20.104873 kubelet[3994]: E1216 13:07:20.104815 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:07:21.156346 systemd[1]: Started sshd@12-10.200.4.31:22-10.200.16.10:41272.service - OpenSSH per-connection server daemon (10.200.16.10:41272). 
Dec 16 13:07:21.163511 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:21.163572 kernel: audit: type=1130 audit(1765890441.155:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.31:22-10.200.16.10:41272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:21.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.31:22-10.200.16.10:41272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:21.679000 audit[6295]: USER_ACCT pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.685915 kernel: audit: type=1101 audit(1765890441.679:801): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.684211 sshd-session[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:21.686284 sshd[6295]: Accepted publickey for core from 10.200.16.10 port 41272 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:21.682000 audit[6295]: CRED_ACQ pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.691959 kernel: audit: type=1103 audit(1765890441.682:802): pid=6295 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.695917 kernel: audit: type=1006 audit(1765890441.683:803): pid=6295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 13:07:21.700328 systemd-logind[2482]: New session 15 of user core. Dec 16 13:07:21.683000 audit[6295]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd363a8b0 a2=3 a3=0 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:21.683000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:21.708672 kernel: audit: type=1300 audit(1765890441.683:803): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd363a8b0 a2=3 a3=0 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:21.708789 kernel: audit: type=1327 audit(1765890441.683:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:21.712049 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 13:07:21.714000 audit[6295]: USER_START pid=6295 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.721862 kernel: audit: type=1105 audit(1765890441.714:804): pid=6295 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.723000 audit[6298]: CRED_ACQ pid=6298 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:21.729877 kernel: audit: type=1103 audit(1765890441.723:805): pid=6298 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.058601 sshd[6298]: Connection closed by 10.200.16.10 port 41272 Dec 16 13:07:22.061446 sshd-session[6295]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:22.061000 audit[6295]: USER_END pid=6295 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.071677 systemd[1]: sshd@12-10.200.4.31:22-10.200.16.10:41272.service: Deactivated successfully. 
Dec 16 13:07:22.072029 kernel: audit: type=1106 audit(1765890442.061:806): pid=6295 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.073330 systemd-logind[2482]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:07:22.074632 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:07:22.061000 audit[6295]: CRED_DISP pid=6295 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.084864 kernel: audit: type=1104 audit(1765890442.061:807): pid=6295 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.085108 systemd-logind[2482]: Removed session 15. Dec 16 13:07:22.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.31:22-10.200.16.10:41272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:22.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.31:22-10.200.16.10:41274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:22.184831 systemd[1]: Started sshd@13-10.200.4.31:22-10.200.16.10:41274.service - OpenSSH per-connection server daemon (10.200.16.10:41274). 
Dec 16 13:07:22.706000 audit[6311]: USER_ACCT pid=6311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.707830 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 41274 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:22.707000 audit[6311]: CRED_ACQ pid=6311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.707000 audit[6311]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9c4932d0 a2=3 a3=0 items=0 ppid=1 pid=6311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:22.707000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:22.709135 sshd-session[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:22.714072 systemd-logind[2482]: New session 16 of user core. Dec 16 13:07:22.719144 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 13:07:22.721000 audit[6311]: USER_START pid=6311 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.723000 audit[6314]: CRED_ACQ pid=6314 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:22.826860 containerd[2510]: time="2025-12-16T13:07:22.826718819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:07:23.102947 containerd[2510]: time="2025-12-16T13:07:23.102725352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:23.106832 containerd[2510]: time="2025-12-16T13:07:23.106760586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:07:23.107975 kubelet[3994]: E1216 13:07:23.107882 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:23.107975 kubelet[3994]: E1216 13:07:23.107942 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:07:23.108303 kubelet[3994]: E1216 13:07:23.108163 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rxsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-ddb857f6f-6zsvz_calico-apiserver(e585732d-c5ba-41d4-91da-20d86215882e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:23.108467 containerd[2510]: time="2025-12-16T13:07:23.106783872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:23.108805 containerd[2510]: time="2025-12-16T13:07:23.108784654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:07:23.110074 kubelet[3994]: E1216 13:07:23.110040 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" 
podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:07:23.151864 sshd[6314]: Connection closed by 10.200.16.10 port 41274 Dec 16 13:07:23.153017 sshd-session[6311]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:23.153000 audit[6311]: USER_END pid=6311 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.154000 audit[6311]: CRED_DISP pid=6311 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.158634 systemd[1]: sshd@13-10.200.4.31:22-10.200.16.10:41274.service: Deactivated successfully. Dec 16 13:07:23.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.31:22-10.200.16.10:41274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:23.162422 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:07:23.164643 systemd-logind[2482]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:07:23.168962 systemd-logind[2482]: Removed session 16. Dec 16 13:07:23.259158 systemd[1]: Started sshd@14-10.200.4.31:22-10.200.16.10:41282.service - OpenSSH per-connection server daemon (10.200.16.10:41282). Dec 16 13:07:23.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.31:22-10.200.16.10:41282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:23.386247 containerd[2510]: time="2025-12-16T13:07:23.385669364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:23.392217 containerd[2510]: time="2025-12-16T13:07:23.392167785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:07:23.392324 containerd[2510]: time="2025-12-16T13:07:23.392275423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:23.392518 kubelet[3994]: E1216 13:07:23.392479 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:23.392561 kubelet[3994]: E1216 13:07:23.392545 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:07:23.393864 kubelet[3994]: E1216 13:07:23.392782 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 13:07:23.394056 containerd[2510]: time="2025-12-16T13:07:23.392924693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:07:23.665488 containerd[2510]: time="2025-12-16T13:07:23.665202083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:23.671324 containerd[2510]: time="2025-12-16T13:07:23.671274173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:07:23.671442 containerd[2510]: time="2025-12-16T13:07:23.671383278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:23.671619 kubelet[3994]: E1216 13:07:23.671575 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:23.671679 kubelet[3994]: E1216 13:07:23.671641 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:07:23.673022 kubelet[3994]: E1216 13:07:23.672954 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76746db8cb-s8hqj_calico-system(17414321-ac90-43ed-affc-521db178bc15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:23.673683 containerd[2510]: time="2025-12-16T13:07:23.673656786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:07:23.675096 kubelet[3994]: E1216 13:07:23.675066 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:07:23.783000 audit[6324]: USER_ACCT pid=6324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.784876 sshd[6324]: Accepted publickey for core from 10.200.16.10 port 41282 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:23.784000 audit[6324]: CRED_ACQ pid=6324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.785000 audit[6324]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc71a54bf0 a2=3 a3=0 items=0 ppid=1 pid=6324 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:23.785000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:23.786465 sshd-session[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:23.791880 systemd-logind[2482]: New session 17 of user core. Dec 16 13:07:23.799048 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:07:23.802000 audit[6324]: USER_START pid=6324 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.803000 audit[6331]: CRED_ACQ pid=6331 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:23.941672 containerd[2510]: time="2025-12-16T13:07:23.941499487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:23.949002 containerd[2510]: time="2025-12-16T13:07:23.948875762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:07:23.949002 containerd[2510]: time="2025-12-16T13:07:23.948972779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:23.949407 kubelet[3994]: E1216 13:07:23.949362 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:23.949855 kubelet[3994]: E1216 13:07:23.949543 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:07:23.949855 kubelet[3994]: E1216 13:07:23.949689 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5jvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-sc2zv_calico-system(da16849a-2afd-49e7-91d5-6aafd4f3fe06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:23.950904 kubelet[3994]: E1216 13:07:23.950821 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:07:24.123866 sshd[6331]: Connection closed by 10.200.16.10 port 41282 Dec 16 13:07:24.122973 sshd-session[6324]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:24.123000 audit[6324]: USER_END pid=6324 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:24.123000 audit[6324]: CRED_DISP pid=6324 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:24.129362 systemd[1]: sshd@14-10.200.4.31:22-10.200.16.10:41282.service: Deactivated successfully. Dec 16 13:07:24.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.31:22-10.200.16.10:41282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:24.132895 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:07:24.135012 systemd-logind[2482]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:07:24.137623 systemd-logind[2482]: Removed session 17. Dec 16 13:07:27.828974 containerd[2510]: time="2025-12-16T13:07:27.828695242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:07:28.104266 containerd[2510]: time="2025-12-16T13:07:28.104131518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:07:28.108535 containerd[2510]: time="2025-12-16T13:07:28.108180111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:07:28.108535 containerd[2510]: time="2025-12-16T13:07:28.108277218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 13:07:28.108661 kubelet[3994]: E1216 13:07:28.108470 3994 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:28.108661 
kubelet[3994]: E1216 13:07:28.108526 3994 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:07:28.108984 kubelet[3994]: E1216 13:07:28.108774 3994 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb74h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2slmb_calico-system(aa48803c-33bc-4da1-94e0-bc256a6f415a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:07:28.110871 kubelet[3994]: E1216 13:07:28.110181 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:07:29.231223 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 13:07:29.231337 kernel: audit: 
type=1130 audit(1765890449.229:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.31:22-10.200.16.10:41292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:29.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.31:22-10.200.16.10:41292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:29.230259 systemd[1]: Started sshd@15-10.200.4.31:22-10.200.16.10:41292.service - OpenSSH per-connection server daemon (10.200.16.10:41292). Dec 16 13:07:29.755000 audit[6350]: USER_ACCT pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.765486 kernel: audit: type=1101 audit(1765890449.755:828): pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.765645 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 41292 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:29.766573 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:29.765000 audit[6350]: CRED_ACQ pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.781959 kernel: audit: type=1103 audit(1765890449.765:829): pid=6350 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.786114 systemd-logind[2482]: New session 18 of user core. Dec 16 13:07:29.807058 kernel: audit: type=1006 audit(1765890449.765:830): pid=6350 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 13:07:29.807139 kernel: audit: type=1300 audit(1765890449.765:830): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2787a40 a2=3 a3=0 items=0 ppid=1 pid=6350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:29.765000 audit[6350]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2787a40 a2=3 a3=0 items=0 ppid=1 pid=6350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:29.807460 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 13:07:29.765000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:29.812913 kernel: audit: type=1327 audit(1765890449.765:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:29.812000 audit[6350]: USER_START pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.821866 kernel: audit: type=1105 audit(1765890449.812:831): pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.822000 audit[6353]: CRED_ACQ pid=6353 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:29.830877 kernel: audit: type=1103 audit(1765890449.822:832): pid=6353 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.099886 sshd[6353]: Connection closed by 10.200.16.10 port 41292 Dec 16 13:07:30.100778 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:30.102000 audit[6350]: USER_END pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.108838 systemd[1]: sshd@15-10.200.4.31:22-10.200.16.10:41292.service: Deactivated successfully. Dec 16 13:07:30.102000 audit[6350]: CRED_DISP pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.111956 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:07:30.116226 kernel: audit: type=1106 audit(1765890450.102:833): pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.116272 kernel: audit: type=1104 audit(1765890450.102:834): pid=6350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.116951 systemd-logind[2482]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:07:30.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.31:22-10.200.16.10:41292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:30.118919 systemd-logind[2482]: Removed session 18. Dec 16 13:07:30.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.31:22-10.200.16.10:35954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:30.204351 systemd[1]: Started sshd@16-10.200.4.31:22-10.200.16.10:35954.service - OpenSSH per-connection server daemon (10.200.16.10:35954). Dec 16 13:07:30.715000 audit[6364]: USER_ACCT pid=6364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.716549 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 35954 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:30.716000 audit[6364]: CRED_ACQ pid=6364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.716000 audit[6364]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed0c1f240 a2=3 a3=0 items=0 ppid=1 pid=6364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:30.716000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:30.717676 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:30.722330 systemd-logind[2482]: New session 19 of user core. Dec 16 13:07:30.727027 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:07:30.729000 audit[6364]: USER_START pid=6364 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:30.731000 audit[6367]: CRED_ACQ pid=6367 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:31.153444 sshd[6367]: Connection closed by 10.200.16.10 port 35954 Dec 16 13:07:31.152091 sshd-session[6364]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:31.151000 audit[6364]: USER_END pid=6364 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:31.151000 audit[6364]: CRED_DISP pid=6364 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:31.157077 systemd[1]: sshd@16-10.200.4.31:22-10.200.16.10:35954.service: Deactivated successfully. Dec 16 13:07:31.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.31:22-10.200.16.10:35954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:31.159987 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:07:31.161898 systemd-logind[2482]: Session 19 logged out. 
Waiting for processes to exit. Dec 16 13:07:31.164739 systemd-logind[2482]: Removed session 19. Dec 16 13:07:31.259357 systemd[1]: Started sshd@17-10.200.4.31:22-10.200.16.10:35970.service - OpenSSH per-connection server daemon (10.200.16.10:35970). Dec 16 13:07:31.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.31:22-10.200.16.10:35970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:31.788000 audit[6376]: USER_ACCT pid=6376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:31.790287 sshd[6376]: Accepted publickey for core from 10.200.16.10 port 35970 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:31.789000 audit[6376]: CRED_ACQ pid=6376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:31.789000 audit[6376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfe45d300 a2=3 a3=0 items=0 ppid=1 pid=6376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:31.789000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:31.792331 sshd-session[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:31.803778 systemd-logind[2482]: New session 20 of user core. Dec 16 13:07:31.808587 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 13:07:31.811000 audit[6376]: USER_START pid=6376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:31.813000 audit[6379]: CRED_ACQ pid=6379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:32.634000 audit[6389]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=6389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:32.634000 audit[6389]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe14dc0d60 a2=0 a3=7ffe14dc0d4c items=0 ppid=4104 pid=6389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.634000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:32.639000 audit[6389]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=6389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:32.639000 audit[6389]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe14dc0d60 a2=0 a3=0 items=0 ppid=4104 pid=6389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.639000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:32.655000 audit[6391]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=6391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:32.655000 audit[6391]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffefd3e2aa0 a2=0 a3=7ffefd3e2a8c items=0 ppid=4104 pid=6391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:32.663000 audit[6391]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:32.663000 audit[6391]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffefd3e2aa0 a2=0 a3=0 items=0 ppid=4104 pid=6391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:32.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:32.748836 sshd[6379]: Connection closed by 10.200.16.10 port 35970 Dec 16 13:07:32.749412 sshd-session[6376]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:32.748000 audit[6376]: USER_END pid=6376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Dec 16 13:07:32.749000 audit[6376]: CRED_DISP pid=6376 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:32.753129 systemd-logind[2482]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:07:32.753800 systemd[1]: sshd@17-10.200.4.31:22-10.200.16.10:35970.service: Deactivated successfully. Dec 16 13:07:32.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.31:22-10.200.16.10:35970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:32.757373 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:07:32.761837 systemd-logind[2482]: Removed session 20. Dec 16 13:07:32.823815 kubelet[3994]: E1216 13:07:32.823777 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:07:32.825234 kubelet[3994]: E1216 13:07:32.825194 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" 
for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:07:32.871087 systemd[1]: Started sshd@18-10.200.4.31:22-10.200.16.10:35972.service - OpenSSH per-connection server daemon (10.200.16.10:35972). Dec 16 13:07:32.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.31:22-10.200.16.10:35972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:33.402000 audit[6396]: USER_ACCT pid=6396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:33.404086 sshd[6396]: Accepted publickey for core from 10.200.16.10 port 35972 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:33.403000 audit[6396]: CRED_ACQ pid=6396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:33.403000 audit[6396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5016b610 a2=3 a3=0 items=0 ppid=1 pid=6396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:33.403000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:33.405292 sshd-session[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:33.409982 systemd-logind[2482]: New session 21 of user core. Dec 16 13:07:33.416036 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:07:33.416000 audit[6396]: USER_START pid=6396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:33.418000 audit[6399]: CRED_ACQ pid=6399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:33.846026 sshd[6399]: Connection closed by 10.200.16.10 port 35972 Dec 16 13:07:33.846601 sshd-session[6396]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:33.846000 audit[6396]: USER_END pid=6396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:33.846000 audit[6396]: CRED_DISP pid=6396 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:33.849829 systemd[1]: sshd@18-10.200.4.31:22-10.200.16.10:35972.service: Deactivated successfully. 
Dec 16 13:07:33.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.31:22-10.200.16.10:35972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:33.851714 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:07:33.853926 systemd-logind[2482]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:07:33.854779 systemd-logind[2482]: Removed session 21. Dec 16 13:07:33.951350 systemd[1]: Started sshd@19-10.200.4.31:22-10.200.16.10:35984.service - OpenSSH per-connection server daemon (10.200.16.10:35984). Dec 16 13:07:33.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.31:22-10.200.16.10:35984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:34.489909 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 16 13:07:34.490031 kernel: audit: type=1101 audit(1765890454.477:868): pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.477000 audit[6409]: USER_ACCT pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.490127 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 35984 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:34.481679 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:34.480000 audit[6409]: CRED_ACQ pid=6409 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.504529 systemd-logind[2482]: New session 22 of user core. Dec 16 13:07:34.508868 kernel: audit: type=1103 audit(1765890454.480:869): pid=6409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.510899 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:07:34.515870 kernel: audit: type=1006 audit(1765890454.480:870): pid=6409 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 13:07:34.480000 audit[6409]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc85d71870 a2=3 a3=0 items=0 ppid=1 pid=6409 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:34.527868 kernel: audit: type=1300 audit(1765890454.480:870): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc85d71870 a2=3 a3=0 items=0 ppid=1 pid=6409 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:34.480000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:34.534866 kernel: audit: type=1327 audit(1765890454.480:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:34.513000 audit[6409]: USER_START pid=6409 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.547858 kernel: audit: type=1105 audit(1765890454.513:871): pid=6409 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.527000 audit[6412]: CRED_ACQ pid=6412 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.557868 kernel: audit: type=1103 audit(1765890454.527:872): pid=6412 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.824430 kubelet[3994]: E1216 13:07:34.824105 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:07:34.826249 kubelet[3994]: E1216 13:07:34.826197 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:07:34.848047 sshd[6412]: Connection closed by 10.200.16.10 port 35984 Dec 16 13:07:34.849127 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:34.851000 audit[6409]: USER_END pid=6409 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.854647 systemd-logind[2482]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:07:34.856520 systemd[1]: sshd@19-10.200.4.31:22-10.200.16.10:35984.service: Deactivated successfully. Dec 16 13:07:34.859468 systemd[1]: session-22.scope: Deactivated successfully. 
Dec 16 13:07:34.861918 kernel: audit: type=1106 audit(1765890454.851:873): pid=6409 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.863937 systemd-logind[2482]: Removed session 22. Dec 16 13:07:34.851000 audit[6409]: CRED_DISP pid=6409 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.31:22-10.200.16.10:35984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:34.876992 kernel: audit: type=1104 audit(1765890454.851:874): pid=6409 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:34.877036 kernel: audit: type=1131 audit(1765890454.851:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.31:22-10.200.16.10:35984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:36.825482 kubelet[3994]: E1216 13:07:36.825441 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:07:39.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.31:22-10.200.16.10:35990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:39.964696 systemd[1]: Started sshd@20-10.200.4.31:22-10.200.16.10:35990.service - OpenSSH per-connection server daemon (10.200.16.10:35990). Dec 16 13:07:39.972227 kernel: audit: type=1130 audit(1765890459.963:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.31:22-10.200.16.10:35990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:40.476000 audit[6424]: USER_ACCT pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.478086 sshd[6424]: Accepted publickey for core from 10.200.16.10 port 35990 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:40.482064 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:40.486890 kernel: audit: type=1101 audit(1765890460.476:877): pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.486972 kernel: audit: type=1103 audit(1765890460.480:878): pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.480000 audit[6424]: CRED_ACQ pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.494280 kernel: audit: type=1006 audit(1765890460.480:879): pid=6424 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 13:07:40.496151 kernel: audit: type=1300 audit(1765890460.480:879): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd6284de0 a2=3 a3=0 items=0 ppid=1 pid=6424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:40.480000 audit[6424]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd6284de0 a2=3 a3=0 items=0 ppid=1 pid=6424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:40.507775 systemd-logind[2482]: New session 23 of user core. Dec 16 13:07:40.480000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:40.512866 kernel: audit: type=1327 audit(1765890460.480:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:40.514485 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:07:40.516000 audit[6424]: USER_START pid=6424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.526866 kernel: audit: type=1105 audit(1765890460.516:880): pid=6424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.528000 audit[6427]: CRED_ACQ pid=6427 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.537866 kernel: audit: type=1103 audit(1765890460.528:881): pid=6427 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.824741 kubelet[3994]: E1216 13:07:40.824271 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:07:40.881870 sshd[6427]: Connection closed by 10.200.16.10 port 35990 Dec 16 13:07:40.882532 sshd-session[6424]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:40.883000 audit[6424]: USER_END pid=6424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.890599 systemd[1]: sshd@20-10.200.4.31:22-10.200.16.10:35990.service: Deactivated successfully. 
Dec 16 13:07:40.891913 kernel: audit: type=1106 audit(1765890460.883:882): pid=6424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.883000 audit[6424]: CRED_DISP pid=6424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.897120 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:07:40.897973 kernel: audit: type=1104 audit(1765890460.883:883): pid=6424 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:40.898157 systemd-logind[2482]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:07:40.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.31:22-10.200.16.10:35990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:40.902238 systemd-logind[2482]: Removed session 23. Dec 16 13:07:45.992055 systemd[1]: Started sshd@21-10.200.4.31:22-10.200.16.10:57298.service - OpenSSH per-connection server daemon (10.200.16.10:57298). Dec 16 13:07:45.999348 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:45.999376 kernel: audit: type=1130 audit(1765890465.991:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.31:22-10.200.16.10:57298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:45.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.31:22-10.200.16.10:57298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:46.518000 audit[6440]: USER_ACCT pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.519402 sshd[6440]: Accepted publickey for core from 10.200.16.10 port 57298 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:46.523571 sshd-session[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:46.525027 kernel: audit: type=1101 audit(1765890466.518:886): pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.525097 kernel: audit: type=1103 audit(1765890466.522:887): pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.522000 audit[6440]: CRED_ACQ pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.529512 systemd-logind[2482]: New session 24 of user core. 
Dec 16 13:07:46.536020 kernel: audit: type=1006 audit(1765890466.522:888): pid=6440 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 13:07:46.522000 audit[6440]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8aad8d70 a2=3 a3=0 items=0 ppid=1 pid=6440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:46.542497 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 13:07:46.522000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:46.546363 kernel: audit: type=1300 audit(1765890466.522:888): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8aad8d70 a2=3 a3=0 items=0 ppid=1 pid=6440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:46.546408 kernel: audit: type=1327 audit(1765890466.522:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:46.545000 audit[6440]: USER_START pid=6440 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.552396 kernel: audit: type=1105 audit(1765890466.545:889): pid=6440 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.546000 audit[6443]: CRED_ACQ pid=6443 uid=0 
auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.558020 kernel: audit: type=1103 audit(1765890466.546:890): pid=6443 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.825064 kubelet[3994]: E1216 13:07:46.824018 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:07:46.825664 kubelet[3994]: E1216 13:07:46.825141 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:07:46.861928 sshd[6443]: Connection closed by 10.200.16.10 port 57298 Dec 16 13:07:46.863041 sshd-session[6440]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:46.864000 audit[6440]: USER_END pid=6440 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.868647 systemd-logind[2482]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:07:46.870634 systemd[1]: sshd@21-10.200.4.31:22-10.200.16.10:57298.service: Deactivated successfully. Dec 16 13:07:46.874341 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:07:46.874860 kernel: audit: type=1106 audit(1765890466.864:891): pid=6440 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.865000 audit[6440]: CRED_DISP pid=6440 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.879643 systemd-logind[2482]: Removed session 24. Dec 16 13:07:46.885863 kernel: audit: type=1104 audit(1765890466.865:892): pid=6440 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:46.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.31:22-10.200.16.10:57298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:47.829221 kubelet[3994]: E1216 13:07:47.827775 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:07:48.825579 kubelet[3994]: E1216 13:07:48.825504 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:07:49.826820 kubelet[3994]: E1216 13:07:49.825793 3994 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:07:51.824297 kubelet[3994]: E1216 13:07:51.823952 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:07:51.975152 systemd[1]: Started sshd@22-10.200.4.31:22-10.200.16.10:50766.service - OpenSSH per-connection server daemon (10.200.16.10:50766). Dec 16 13:07:51.985874 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:51.985944 kernel: audit: type=1130 audit(1765890471.974:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.31:22-10.200.16.10:50766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:51.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.31:22-10.200.16.10:50766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:07:52.516000 audit[6478]: USER_ACCT pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.520525 sshd-session[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:52.526201 kernel: audit: type=1101 audit(1765890472.516:895): pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.526269 sshd[6478]: Accepted publickey for core from 10.200.16.10 port 50766 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:52.516000 audit[6478]: CRED_ACQ pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.536146 systemd-logind[2482]: New session 25 of user core. Dec 16 13:07:52.539863 kernel: audit: type=1103 audit(1765890472.516:896): pid=6478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.546663 kernel: audit: type=1006 audit(1765890472.516:897): pid=6478 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 13:07:52.547053 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 13:07:52.516000 audit[6478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaefcb200 a2=3 a3=0 items=0 ppid=1 pid=6478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:52.557971 kernel: audit: type=1300 audit(1765890472.516:897): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaefcb200 a2=3 a3=0 items=0 ppid=1 pid=6478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:52.516000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:52.575439 kernel: audit: type=1327 audit(1765890472.516:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:52.575510 kernel: audit: type=1105 audit(1765890472.548:898): pid=6478 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.548000 audit[6478]: USER_START pid=6478 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.548000 audit[6481]: CRED_ACQ pid=6481 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.585864 kernel: audit: type=1103 
audit(1765890472.548:899): pid=6481 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.864875 sshd[6481]: Connection closed by 10.200.16.10 port 50766 Dec 16 13:07:52.865941 sshd-session[6478]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:52.866000 audit[6478]: USER_END pid=6478 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.870585 systemd[1]: sshd@22-10.200.4.31:22-10.200.16.10:50766.service: Deactivated successfully. Dec 16 13:07:52.873146 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:07:52.876007 systemd-logind[2482]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:07:52.876996 systemd-logind[2482]: Removed session 25. 
Dec 16 13:07:52.866000 audit[6478]: CRED_DISP pid=6478 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.885360 kernel: audit: type=1106 audit(1765890472.866:900): pid=6478 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.885467 kernel: audit: type=1104 audit(1765890472.866:901): pid=6478 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:52.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.31:22-10.200.16.10:50766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:57.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.31:22-10.200.16.10:50770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:57.979123 systemd[1]: Started sshd@23-10.200.4.31:22-10.200.16.10:50770.service - OpenSSH per-connection server daemon (10.200.16.10:50770). Dec 16 13:07:57.980552 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:07:57.980605 kernel: audit: type=1130 audit(1765890477.978:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.31:22-10.200.16.10:50770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 13:07:58.502000 audit[6493]: USER_ACCT pid=6493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.511939 kernel: audit: type=1101 audit(1765890478.502:904): pid=6493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.512037 sshd[6493]: Accepted publickey for core from 10.200.16.10 port 50770 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:07:58.513579 sshd-session[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:58.512000 audit[6493]: CRED_ACQ pid=6493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.522869 kernel: audit: type=1103 audit(1765890478.512:905): pid=6493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.523079 kernel: audit: type=1006 audit(1765890478.512:906): pid=6493 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 13:07:58.512000 audit[6493]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffef0c32a0 a2=3 a3=0 items=0 ppid=1 pid=6493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:58.512000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:58.540170 kernel: audit: type=1300 audit(1765890478.512:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffef0c32a0 a2=3 a3=0 items=0 ppid=1 pid=6493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:58.540212 kernel: audit: type=1327 audit(1765890478.512:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:07:58.545284 systemd-logind[2482]: New session 26 of user core. Dec 16 13:07:58.550221 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 13:07:58.554000 audit[6493]: USER_START pid=6493 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.565890 kernel: audit: type=1105 audit(1765890478.554:907): pid=6493 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.557000 audit[6496]: CRED_ACQ pid=6496 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.578865 kernel: audit: type=1103 audit(1765890478.557:908): pid=6496 uid=0 auid=500 ses=26 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.882120 sshd[6496]: Connection closed by 10.200.16.10 port 50770 Dec 16 13:07:58.884553 sshd-session[6493]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:58.886000 audit[6493]: USER_END pid=6493 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.889652 systemd[1]: sshd@23-10.200.4.31:22-10.200.16.10:50770.service: Deactivated successfully. Dec 16 13:07:58.892718 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 13:07:58.886000 audit[6493]: CRED_DISP pid=6493 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.899567 systemd-logind[2482]: Session 26 logged out. Waiting for processes to exit. Dec 16 13:07:58.901375 systemd-logind[2482]: Removed session 26. 
Dec 16 13:07:58.901754 kernel: audit: type=1106 audit(1765890478.886:909): pid=6493 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.902695 kernel: audit: type=1104 audit(1765890478.886:910): pid=6493 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:07:58.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.31:22-10.200.16.10:50770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:07:59.615000 audit[6508]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:59.615000 audit[6508]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc950bcc70 a2=0 a3=7ffc950bcc5c items=0 ppid=4104 pid=6508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:59.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:59.621000 audit[6508]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=6508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:07:59.621000 audit[6508]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc950bcc70 a2=0 a3=7ffc950bcc5c items=0 ppid=4104 pid=6508 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:07:59.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:07:59.827264 kubelet[3994]: E1216 13:07:59.827219 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:08:00.824391 kubelet[3994]: E1216 13:08:00.824342 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:08:01.828290 kubelet[3994]: E1216 13:08:01.826469 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:08:02.826013 kubelet[3994]: E1216 13:08:02.825909 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:08:02.826533 kubelet[3994]: E1216 13:08:02.826376 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:08:02.826533 kubelet[3994]: E1216 13:08:02.826501 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:08:03.996187 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:08:03.996317 kernel: audit: type=1130 audit(1765890483.990:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.31:22-10.200.16.10:35916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:03.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.31:22-10.200.16.10:35916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:03.991356 systemd[1]: Started sshd@24-10.200.4.31:22-10.200.16.10:35916.service - OpenSSH per-connection server daemon (10.200.16.10:35916). 
Dec 16 13:08:04.514000 audit[6510]: USER_ACCT pid=6510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.519486 sshd-session[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:04.520363 sshd[6510]: Accepted publickey for core from 10.200.16.10 port 35916 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:08:04.520883 kernel: audit: type=1101 audit(1765890484.514:915): pid=6510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.518000 audit[6510]: CRED_ACQ pid=6510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.525941 kernel: audit: type=1103 audit(1765890484.518:916): pid=6510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.526019 kernel: audit: type=1006 audit(1765890484.518:917): pid=6510 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 13:08:04.518000 audit[6510]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd74413b30 a2=3 a3=0 items=0 ppid=1 pid=6510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:04.532629 kernel: audit: type=1300 audit(1765890484.518:917): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd74413b30 a2=3 a3=0 items=0 ppid=1 pid=6510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:04.518000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:04.535687 kernel: audit: type=1327 audit(1765890484.518:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:04.536246 systemd-logind[2482]: New session 27 of user core. Dec 16 13:08:04.541082 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 13:08:04.543000 audit[6510]: USER_START pid=6510 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.551893 kernel: audit: type=1105 audit(1765890484.543:918): pid=6510 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.552000 audit[6513]: CRED_ACQ pid=6513 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.558921 kernel: audit: type=1103 audit(1765890484.552:919): pid=6513 uid=0 auid=500 ses=27 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.889950 sshd[6513]: Connection closed by 10.200.16.10 port 35916 Dec 16 13:08:04.890884 sshd-session[6510]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:04.893000 audit[6510]: USER_END pid=6510 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.901929 kernel: audit: type=1106 audit(1765890484.893:920): pid=6510 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.901865 systemd[1]: sshd@24-10.200.4.31:22-10.200.16.10:35916.service: Deactivated successfully. Dec 16 13:08:04.904338 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 13:08:04.893000 audit[6510]: CRED_DISP pid=6510 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.912196 kernel: audit: type=1104 audit(1765890484.893:921): pid=6510 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:04.911952 systemd-logind[2482]: Session 27 logged out. 
Waiting for processes to exit. Dec 16 13:08:04.913138 systemd-logind[2482]: Removed session 27. Dec 16 13:08:04.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.31:22-10.200.16.10:35916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:10.012044 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:08:10.012170 kernel: audit: type=1130 audit(1765890490.002:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.31:22-10.200.16.10:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:10.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.31:22-10.200.16.10:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:10.003039 systemd[1]: Started sshd@25-10.200.4.31:22-10.200.16.10:35920.service - OpenSSH per-connection server daemon (10.200.16.10:35920). 
Dec 16 13:08:10.527000 audit[6527]: USER_ACCT pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.528871 sshd[6527]: Accepted publickey for core from 10.200.16.10 port 35920 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:08:10.530583 sshd-session[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:10.529000 audit[6527]: CRED_ACQ pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.540838 kernel: audit: type=1101 audit(1765890490.527:924): pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.540917 kernel: audit: type=1103 audit(1765890490.529:925): pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.543243 systemd-logind[2482]: New session 28 of user core. 
Dec 16 13:08:10.545631 kernel: audit: type=1006 audit(1765890490.529:926): pid=6527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 13:08:10.529000 audit[6527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff45e60df0 a2=3 a3=0 items=0 ppid=1 pid=6527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:10.550043 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 16 13:08:10.556873 kernel: audit: type=1300 audit(1765890490.529:926): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff45e60df0 a2=3 a3=0 items=0 ppid=1 pid=6527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:10.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:10.553000 audit[6527]: USER_START pid=6527 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.565035 kernel: audit: type=1327 audit(1765890490.529:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:10.565124 kernel: audit: type=1105 audit(1765890490.553:927): pid=6527 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.558000 audit[6530]: CRED_ACQ pid=6530 uid=0 
auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.570354 kernel: audit: type=1103 audit(1765890490.558:928): pid=6530 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.867743 sshd[6530]: Connection closed by 10.200.16.10 port 35920 Dec 16 13:08:10.868313 sshd-session[6527]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:10.868000 audit[6527]: USER_END pid=6527 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.878921 kernel: audit: type=1106 audit(1765890490.868:929): pid=6527 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.871771 systemd-logind[2482]: Session 28 logged out. Waiting for processes to exit. Dec 16 13:08:10.873905 systemd[1]: sshd@25-10.200.4.31:22-10.200.16.10:35920.service: Deactivated successfully. Dec 16 13:08:10.876473 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 13:08:10.878808 systemd-logind[2482]: Removed session 28. 
Dec 16 13:08:10.868000 audit[6527]: CRED_DISP pid=6527 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:10.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.31:22-10.200.16.10:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:10.884867 kernel: audit: type=1104 audit(1765890490.868:930): pid=6527 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:13.824380 kubelet[3994]: E1216 13:08:13.823327 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-r577f" podUID="dd14feb0-ccbc-4867-9fa9-0c2099e4adc4" Dec 16 13:08:13.824954 kubelet[3994]: E1216 13:08:13.824658 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" 
podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a" Dec 16 13:08:13.826340 kubelet[3994]: E1216 13:08:13.826296 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76746db8cb-s8hqj" podUID="17414321-ac90-43ed-affc-521db178bc15" Dec 16 13:08:14.823875 kubelet[3994]: E1216 13:08:14.823466 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-556dc6d57-6kbq4" podUID="78647b74-1322-40dc-8769-efb3043691d4" Dec 16 13:08:15.829287 kubelet[3994]: E1216 13:08:15.829241 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-sc2zv" podUID="da16849a-2afd-49e7-91d5-6aafd4f3fe06" Dec 16 13:08:15.980166 systemd[1]: Started sshd@26-10.200.4.31:22-10.200.16.10:50128.service - OpenSSH per-connection server daemon (10.200.16.10:50128). Dec 16 13:08:15.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.31:22-10.200.16.10:50128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:15.981924 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:08:15.982334 kernel: audit: type=1130 audit(1765890495.979:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.31:22-10.200.16.10:50128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:08:16.512000 audit[6544]: USER_ACCT pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.519362 sshd[6544]: Accepted publickey for core from 10.200.16.10 port 50128 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:08:16.520073 kernel: audit: type=1101 audit(1765890496.512:933): pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.520962 sshd-session[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:16.519000 audit[6544]: CRED_ACQ pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.531289 kernel: audit: type=1103 audit(1765890496.519:934): pid=6544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.531370 kernel: audit: type=1006 audit(1765890496.519:935): pid=6544 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Dec 16 13:08:16.519000 audit[6544]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd60655f80 a2=3 a3=0 items=0 ppid=1 pid=6544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:16.541282 kernel: audit: type=1300 audit(1765890496.519:935): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd60655f80 a2=3 a3=0 items=0 ppid=1 pid=6544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:16.541889 systemd-logind[2482]: New session 29 of user core. Dec 16 13:08:16.545791 kernel: audit: type=1327 audit(1765890496.519:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:16.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:16.551437 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 16 13:08:16.557000 audit[6544]: USER_START pid=6544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.566865 kernel: audit: type=1105 audit(1765890496.557:936): pid=6544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.567000 audit[6547]: CRED_ACQ pid=6547 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.573875 kernel: audit: type=1103 audit(1765890496.567:937): pid=6547 uid=0 auid=500 ses=29 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.823921 kubelet[3994]: E1216 13:08:16.823737 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ddb857f6f-6zsvz" podUID="e585732d-c5ba-41d4-91da-20d86215882e" Dec 16 13:08:16.865804 sshd[6547]: Connection closed by 10.200.16.10 port 50128 Dec 16 13:08:16.866358 sshd-session[6544]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:16.865000 audit[6544]: USER_END pid=6544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.873900 kernel: audit: type=1106 audit(1765890496.865:938): pid=6544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.873733 systemd[1]: sshd@26-10.200.4.31:22-10.200.16.10:50128.service: Deactivated successfully. 
Dec 16 13:08:16.865000 audit[6544]: CRED_DISP pid=6544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.876042 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 13:08:16.879597 systemd-logind[2482]: Session 29 logged out. Waiting for processes to exit. Dec 16 13:08:16.881105 systemd-logind[2482]: Removed session 29. Dec 16 13:08:16.881884 kernel: audit: type=1104 audit(1765890496.865:939): pid=6544 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:16.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.31:22-10.200.16.10:50128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:21.976714 systemd[1]: Started sshd@27-10.200.4.31:22-10.200.16.10:34942.service - OpenSSH per-connection server daemon (10.200.16.10:34942). Dec 16 13:08:21.984035 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 13:08:21.984068 kernel: audit: type=1130 audit(1765890501.976:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.31:22-10.200.16.10:34942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:21.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.31:22-10.200.16.10:34942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:08:22.499000 audit[6585]: USER_ACCT pid=6585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.505895 sshd-session[6585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:22.506622 sshd[6585]: Accepted publickey for core from 10.200.16.10 port 34942 ssh2: RSA SHA256:XB2dojyG3S3qIBG43pJfS4qRaZI6Gzjw37XrOBzISwI Dec 16 13:08:22.512200 kernel: audit: type=1101 audit(1765890502.499:942): pid=6585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.512284 kernel: audit: type=1103 audit(1765890502.504:943): pid=6585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.504000 audit[6585]: CRED_ACQ pid=6585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.524017 kernel: audit: type=1006 audit(1765890502.504:944): pid=6585 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Dec 16 13:08:22.529784 kernel: audit: type=1300 audit(1765890502.504:944): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5a59a760 a2=3 a3=0 items=0 ppid=1 pid=6585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:22.504000 audit[6585]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5a59a760 a2=3 a3=0 items=0 ppid=1 pid=6585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:08:22.504000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:22.531879 kernel: audit: type=1327 audit(1765890502.504:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:08:22.536557 systemd-logind[2482]: New session 30 of user core. Dec 16 13:08:22.541026 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 16 13:08:22.543000 audit[6585]: USER_START pid=6585 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.549872 kernel: audit: type=1105 audit(1765890502.543:945): pid=6585 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.551000 audit[6589]: CRED_ACQ pid=6589 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.561875 kernel: audit: type=1103 audit(1765890502.551:946): pid=6589 uid=0 auid=500 ses=30 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.880879 sshd[6589]: Connection closed by 10.200.16.10 port 34942 Dec 16 13:08:22.882006 sshd-session[6585]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:22.884000 audit[6585]: USER_END pid=6585 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.899513 kernel: audit: type=1106 audit(1765890502.884:947): pid=6585 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.898132 systemd[1]: sshd@27-10.200.4.31:22-10.200.16.10:34942.service: Deactivated successfully. Dec 16 13:08:22.900574 systemd[1]: session-30.scope: Deactivated successfully. Dec 16 13:08:22.903920 systemd-logind[2482]: Session 30 logged out. Waiting for processes to exit. Dec 16 13:08:22.905823 systemd-logind[2482]: Removed session 30. 
Dec 16 13:08:22.885000 audit[6585]: CRED_DISP pid=6585 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.912867 kernel: audit: type=1104 audit(1765890502.885:948): pid=6585 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 13:08:22.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.31:22-10.200.16.10:34942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:08:24.823406 kubelet[3994]: E1216 13:08:24.823360 3994 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2slmb" podUID="aa48803c-33bc-4da1-94e0-bc256a6f415a"