Dec 13 00:24:55.189730 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 20:55:10 -00 2025 Dec 13 00:24:55.189810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:24:55.189839 kernel: BIOS-provided physical RAM map: Dec 13 00:24:55.189864 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 13 00:24:55.189885 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 13 00:24:55.189915 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Dec 13 00:24:55.189942 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 13 00:24:55.189968 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Dec 13 00:24:55.189989 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 13 00:24:55.190010 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 13 00:24:55.190035 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 13 00:24:55.190075 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 13 00:24:55.190100 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 13 00:24:55.190121 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 13 00:24:55.190151 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 13 00:24:55.190174 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 13 00:24:55.190200 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 13 00:24:55.190221 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:24:55.190249 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:24:55.190271 kernel: NX (Execute Disable) protection: active Dec 13 00:24:55.190296 kernel: APIC: Static calls initialized Dec 13 00:24:55.190318 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Dec 13 00:24:55.190345 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Dec 13 00:24:55.190366 kernel: extended physical RAM map: Dec 13 00:24:55.190388 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 13 00:24:55.190414 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 13 00:24:55.190438 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Dec 13 00:24:55.190463 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 13 00:24:55.190485 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Dec 13 00:24:55.190517 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Dec 13 00:24:55.190539 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Dec 13 00:24:55.190553 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Dec 13 00:24:55.190563 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Dec 13 00:24:55.190573 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 13 00:24:55.190583 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 13 00:24:55.190593 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 13 00:24:55.190604 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 13 00:24:55.190614 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 13 00:24:55.190624 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 13 00:24:55.190637 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 13 00:24:55.190652 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 13 00:24:55.190662 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 13 00:24:55.190673 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:24:55.190683 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:24:55.190696 kernel: efi: EFI v2.7 by EDK II Dec 13 00:24:55.190707 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Dec 13 00:24:55.190717 kernel: random: crng init done Dec 13 00:24:55.190728 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 13 00:24:55.190738 kernel: secureboot: Secure boot enabled Dec 13 00:24:55.190749 kernel: SMBIOS 2.8 present. Dec 13 00:24:55.190759 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 13 00:24:55.190770 kernel: DMI: Memory slots populated: 1/1 Dec 13 00:24:55.190780 kernel: Hypervisor detected: KVM Dec 13 00:24:55.190803 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 13 00:24:55.190814 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 00:24:55.190825 kernel: kvm-clock: using sched offset of 4991617298 cycles Dec 13 00:24:55.190836 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 00:24:55.190847 kernel: tsc: Detected 2794.748 MHz processor Dec 13 00:24:55.190859 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 00:24:55.190870 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 00:24:55.190881 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 13 00:24:55.190892 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 00:24:55.190906 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 00:24:55.190917 kernel: Using GB pages for direct mapping Dec 13 00:24:55.190928 kernel: ACPI: Early table checksum verification disabled Dec 13 00:24:55.190939 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Dec 13 00:24:55.190950 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 00:24:55.190961 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:24:55.190972 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:24:55.190986 kernel: ACPI: FACS 0x000000009BBDD000 000040 Dec 13 00:24:55.190997 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:24:55.191008 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:24:55.191019 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 13 00:24:55.191030 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:24:55.191054 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 00:24:55.191066 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Dec 13 00:24:55.191080 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Dec 13 00:24:55.191091 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Dec 13 00:24:55.191102 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Dec 13 00:24:55.191113 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Dec 13 00:24:55.191124 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Dec 13 00:24:55.191135 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Dec 13 00:24:55.191146 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Dec 13 00:24:55.191160 kernel: No NUMA configuration found Dec 13 00:24:55.191171 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Dec 13 00:24:55.191182 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Dec 13 00:24:55.191193 kernel: Zone ranges: Dec 13 00:24:55.191204 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 00:24:55.191215 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Dec 13 00:24:55.191225 kernel: Normal empty Dec 13 00:24:55.191236 kernel: Device empty Dec 13 00:24:55.191249 kernel: Movable zone start for each node Dec 13 00:24:55.191260 kernel: Early memory node ranges Dec 13 00:24:55.191271 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Dec 13 00:24:55.191282 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Dec 13 00:24:55.191293 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Dec 13 00:24:55.191304 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Dec 13 00:24:55.191315 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Dec 13 00:24:55.191328 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Dec 13 00:24:55.191339 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 00:24:55.191350 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Dec 13 00:24:55.191361 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 00:24:55.191372 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 00:24:55.191383 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 13 00:24:55.191394 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Dec 13 00:24:55.191405 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 00:24:55.191631 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 00:24:55.191642 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 00:24:55.191657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 00:24:55.191668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 00:24:55.191679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 00:24:55.191690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 00:24:55.191701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 00:24:55.191714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 00:24:55.191725 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 00:24:55.191737 kernel: TSC 
deadline timer available Dec 13 00:24:55.191748 kernel: CPU topo: Max. logical packages: 1 Dec 13 00:24:55.191759 kernel: CPU topo: Max. logical dies: 1 Dec 13 00:24:55.191777 kernel: CPU topo: Max. dies per package: 1 Dec 13 00:24:55.191797 kernel: CPU topo: Max. threads per core: 1 Dec 13 00:24:55.191809 kernel: CPU topo: Num. cores per package: 4 Dec 13 00:24:55.191821 kernel: CPU topo: Num. threads per package: 4 Dec 13 00:24:55.191831 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 13 00:24:55.191845 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 00:24:55.191856 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 00:24:55.191868 kernel: kvm-guest: setup PV sched yield Dec 13 00:24:55.191881 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Dec 13 00:24:55.191894 kernel: Booting paravirtualized kernel on KVM Dec 13 00:24:55.191906 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 00:24:55.191918 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 00:24:55.191930 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 13 00:24:55.191941 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 13 00:24:55.191952 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 00:24:55.191963 kernel: kvm-guest: PV spinlocks enabled Dec 13 00:24:55.191977 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 00:24:55.191990 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:24:55.192002 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 00:24:55.192015 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 00:24:55.192027 kernel: Fallback order for Node 0: 0 Dec 13 00:24:55.192039 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Dec 13 00:24:55.192064 kernel: Policy zone: DMA32 Dec 13 00:24:55.192079 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 00:24:55.192091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 00:24:55.192102 kernel: ftrace: allocating 40103 entries in 157 pages Dec 13 00:24:55.192114 kernel: ftrace: allocated 157 pages with 5 groups Dec 13 00:24:55.192125 kernel: Dynamic Preempt: voluntary Dec 13 00:24:55.192137 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 00:24:55.192149 kernel: rcu: RCU event tracing is enabled. Dec 13 00:24:55.192163 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 00:24:55.192175 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 00:24:55.192186 kernel: Rude variant of Tasks RCU enabled. Dec 13 00:24:55.192198 kernel: Tracing variant of Tasks RCU enabled. Dec 13 00:24:55.192209 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 00:24:55.192221 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 00:24:55.192232 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Dec 13 00:24:55.192244 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:24:55.192258 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:24:55.192270 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 00:24:55.192281 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 00:24:55.192293 kernel: Console: colour dummy device 80x25 Dec 13 00:24:55.192304 kernel: printk: legacy console [ttyS0] enabled Dec 13 00:24:55.192316 kernel: ACPI: Core revision 20240827 Dec 13 00:24:55.192328 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 00:24:55.192342 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 00:24:55.192353 kernel: x2apic enabled Dec 13 00:24:55.192365 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 00:24:55.192377 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 00:24:55.192389 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 00:24:55.192400 kernel: kvm-guest: setup PV IPIs Dec 13 00:24:55.192412 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 00:24:55.192426 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:24:55.192438 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 00:24:55.192449 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 00:24:55.192461 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 00:24:55.192472 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 00:24:55.192484 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 00:24:55.192496 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 00:24:55.192510 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 13 00:24:55.192521 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 00:24:55.192533 kernel: active return thunk: retbleed_return_thunk Dec 13 00:24:55.192545 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 00:24:55.192557 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 00:24:55.192568 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 00:24:55.192580 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 00:24:55.192595 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 00:24:55.192607 kernel: active return thunk: srso_return_thunk Dec 13 00:24:55.192619 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 00:24:55.192630 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 00:24:55.192642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 00:24:55.192653 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 00:24:55.192665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 00:24:55.192679 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Dec 13 00:24:55.192690 kernel: Freeing SMP alternatives memory: 32K Dec 13 00:24:55.192702 kernel: pid_max: default: 32768 minimum: 301 Dec 13 00:24:55.192713 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 13 00:24:55.192725 kernel: landlock: Up and running. Dec 13 00:24:55.192736 kernel: SELinux: Initializing. Dec 13 00:24:55.192748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:24:55.192762 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:24:55.192773 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 00:24:55.192785 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 00:24:55.192806 kernel: ... version: 0 Dec 13 00:24:55.192817 kernel: ... bit width: 48 Dec 13 00:24:55.192829 kernel: ... generic registers: 6 Dec 13 00:24:55.192840 kernel: ... value mask: 0000ffffffffffff Dec 13 00:24:55.192854 kernel: ... max period: 00007fffffffffff Dec 13 00:24:55.192865 kernel: ... fixed-purpose events: 0 Dec 13 00:24:55.192877 kernel: ... event mask: 000000000000003f Dec 13 00:24:55.192888 kernel: signal: max sigframe size: 1776 Dec 13 00:24:55.192900 kernel: rcu: Hierarchical SRCU implementation. Dec 13 00:24:55.192912 kernel: rcu: Max phase no-delay instances is 400. Dec 13 00:24:55.192924 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 13 00:24:55.192937 kernel: smp: Bringing up secondary CPUs ... Dec 13 00:24:55.192949 kernel: smpboot: x86: Booting SMP configuration: Dec 13 00:24:55.192960 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 00:24:55.192971 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 00:24:55.192982 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 00:24:55.192995 kernel: Memory: 2425600K/2552216K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15596K init, 2444K bss, 120680K reserved, 0K cma-reserved) Dec 13 00:24:55.193006 kernel: devtmpfs: initialized Dec 13 00:24:55.193020 kernel: x86/mm: Memory block size: 128MB Dec 13 00:24:55.193032 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Dec 13 00:24:55.193043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Dec 13 00:24:55.193067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 00:24:55.193079 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 00:24:55.193090 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 00:24:55.193101 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 00:24:55.193116 kernel: audit: initializing netlink subsys (disabled) Dec 13 00:24:55.193127 kernel: audit: type=2000 audit(1765585492.347:1): state=initialized audit_enabled=0 res=1 Dec 13 00:24:55.193139 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 00:24:55.193150 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 00:24:55.193162 kernel: cpuidle: using governor menu Dec 13 00:24:55.193174 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 00:24:55.193186 kernel: dca service started, version 1.12.1 Dec 13 00:24:55.193199 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 13 00:24:55.193211 kernel: PCI: Using configuration type 1 for base access Dec 13 00:24:55.193222 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Dec 13 00:24:55.193234 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 00:24:55.193246 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 00:24:55.193257 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 00:24:55.193269 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 00:24:55.193283 kernel: ACPI: Added _OSI(Module Device) Dec 13 00:24:55.193294 kernel: ACPI: Added _OSI(Processor Device) Dec 13 00:24:55.193306 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 00:24:55.193317 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 00:24:55.193328 kernel: ACPI: Interpreter enabled Dec 13 00:24:55.193340 kernel: ACPI: PM: (supports S0 S5) Dec 13 00:24:55.193351 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 00:24:55.193362 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 00:24:55.193376 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 00:24:55.193388 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 00:24:55.193399 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 00:24:55.193732 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 00:24:55.193952 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 00:24:55.194172 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 00:24:55.194193 kernel: PCI host bridge to bus 0000:00 Dec 13 00:24:55.194390 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 00:24:55.194576 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 00:24:55.194763 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 00:24:55.194967 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Dec 13 00:24:55.195181 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 13 00:24:55.195367 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Dec 13 00:24:55.195555 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 00:24:55.195797 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 13 00:24:55.196084 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 13 00:24:55.196396 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Dec 13 00:24:55.196709 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Dec 13 00:24:55.196991 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 13 00:24:55.197199 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 00:24:55.197401 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 13 00:24:55.197611 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Dec 13 00:24:55.197815 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Dec 13 00:24:55.197998 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Dec 13 00:24:55.198204 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 13 00:24:55.198381 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Dec 13 00:24:55.198549 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Dec 13 00:24:55.198719 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Dec 13 00:24:55.198911 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 13 00:24:55.199093 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Dec 13 00:24:55.199261 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Dec 13 00:24:55.199427 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Dec 13 00:24:55.199591 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Dec 13 00:24:55.199767 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 13 00:24:55.199945 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 00:24:55.200134 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 13 00:24:55.200300 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Dec 13 00:24:55.200464 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Dec 13 00:24:55.200635 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 13 00:24:55.200812 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Dec 13 00:24:55.200825 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 00:24:55.200834 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 00:24:55.200843 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 00:24:55.200852 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 00:24:55.200860 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 00:24:55.200869 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 00:24:55.200880 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 00:24:55.200889 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 00:24:55.200897 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 00:24:55.200906 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 00:24:55.200915 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 00:24:55.200924 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 00:24:55.200932 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 00:24:55.200943 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 00:24:55.200952 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 00:24:55.200961 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 00:24:55.200969 kernel: iommu: Default domain type: Translated Dec 13 00:24:55.200978 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 00:24:55.200987 kernel: efivars: Registered efivars operations Dec 13 00:24:55.200996 kernel: PCI: Using ACPI for IRQ routing Dec 13 00:24:55.201006 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 00:24:55.201015 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Dec 13 00:24:55.201024 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Dec 13 00:24:55.201033 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Dec 13 00:24:55.201041 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Dec 13 00:24:55.201061 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Dec 13 00:24:55.201265 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 00:24:55.201564 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 00:24:55.201757 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 00:24:55.201770 kernel: vgaarb: loaded Dec 13 00:24:55.201782 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 00:24:55.201801 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 00:24:55.201812 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 00:24:55.201823 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 00:24:55.201838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 00:24:55.201849 kernel: pnp: PnP ACPI init Dec 13 00:24:55.202062 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Dec 13 00:24:55.202078 kernel: pnp: PnP ACPI: found 6 devices Dec 13 00:24:55.202089 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 00:24:55.202100 kernel: NET: Registered PF_INET protocol family Dec 13 00:24:55.202111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 00:24:55.202126 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 00:24:55.202137 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 00:24:55.202148 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 00:24:55.202159 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 00:24:55.202170 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 00:24:55.202181 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:24:55.202193 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:24:55.202204 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 00:24:55.202215 kernel: NET: Registered PF_XDP protocol family Dec 13 00:24:55.202400 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Dec 13 00:24:55.202590 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Dec 13 00:24:55.202764 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 00:24:55.202942 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 00:24:55.203113 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 00:24:55.203299 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Dec 13 00:24:55.203581 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 13 00:24:55.203870 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Dec 13 00:24:55.203890 kernel: PCI: CLS 0 bytes, default 64 Dec 13 00:24:55.203903 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:24:55.203916 kernel: Initialise system trusted keyrings Dec 13 00:24:55.203933 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 00:24:55.203946 kernel: Key type asymmetric registered Dec 13 00:24:55.203958 kernel: Asymmetric key parser 'x509' registered Dec 13 00:24:55.203987 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 00:24:55.204001 kernel: io scheduler mq-deadline registered Dec 13 00:24:55.204014 kernel: io scheduler kyber registered Dec 13 00:24:55.204027 kernel: io scheduler bfq registered Dec 13 00:24:55.204040 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 00:24:55.204072 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 00:24:55.204101 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 00:24:55.204113 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 00:24:55.204126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 00:24:55.204137 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 00:24:55.204149 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 00:24:55.204164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 00:24:55.204176 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 00:24:55.204402 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 00:24:55.204422 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 00:24:55.204620 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 00:24:55.204831 kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T00:24:53 UTC (1765585493) Dec 13 00:24:55.205037 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 00:24:55.205070 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 00:24:55.205082 kernel: efifb: probing for efifb Dec 13 00:24:55.205095 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 00:24:55.205106 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 00:24:55.205118 kernel: efifb: scrolling: redraw Dec 13 00:24:55.205130 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 00:24:55.205147 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 00:24:55.205161 kernel: fb0: EFI VGA frame buffer device Dec 13 00:24:55.205173 kernel: pstore: Using crash dump compression: deflate Dec 13 00:24:55.205185 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 00:24:55.205198 kernel: NET: Registered PF_INET6 protocol family Dec 13 00:24:55.205212 kernel: Segment Routing with IPv6 Dec 13 00:24:55.205224 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 00:24:55.205237 kernel: NET: Registered PF_PACKET protocol family Dec 13 00:24:55.205248 kernel: Key type dns_resolver registered Dec 13 00:24:55.205260 kernel: IPI shorthand broadcast: enabled Dec 13 00:24:55.205272 kernel: sched_clock: Marking stable (1699002716, 257301756)->(2086339320, -130034848) Dec 13 00:24:55.205287 kernel: registered taskstats version 1 Dec 13 00:24:55.205303 kernel: Loading compiled-in X.509 certificates Dec 13 00:24:55.205315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 199a9f6885410acbf0a1b178e5562253352ca03c' Dec 13 00:24:55.205327 kernel: Demotion targets for Node 0: null Dec 13 00:24:55.205339 kernel: Key type .fscrypt registered Dec 13 00:24:55.205351 kernel: Key type fscrypt-provisioning registered Dec 13 00:24:55.205363 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 00:24:55.205375 kernel: ima: Allocated hash algorithm: sha1 Dec 13 00:24:55.205389 kernel: ima: No architecture policies found Dec 13 00:24:55.205402 kernel: clk: Disabling unused clocks Dec 13 00:24:55.205414 kernel: Freeing unused kernel image (initmem) memory: 15596K Dec 13 00:24:55.205427 kernel: Write protecting the kernel read-only data: 47104k Dec 13 00:24:55.205439 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 13 00:24:55.205452 kernel: Run /init as init process Dec 13 00:24:55.205464 kernel: with arguments: Dec 13 00:24:55.205480 kernel: /init Dec 13 00:24:55.205492 kernel: with environment: Dec 13 00:24:55.205504 kernel: HOME=/ Dec 13 00:24:55.205516 kernel: TERM=linux Dec 13 00:24:55.205528 kernel: SCSI subsystem initialized Dec 13 00:24:55.205540 kernel: libata version 3.00 loaded. Dec 13 00:24:55.205767 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 00:24:55.205800 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 00:24:55.206011 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 13 00:24:55.206235 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 13 00:24:55.206445 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 00:24:55.206676 kernel: scsi host0: ahci Dec 13 00:24:55.206911 kernel: scsi host1: ahci Dec 13 00:24:55.207154 kernel: scsi host2: ahci Dec 13 00:24:55.207377 kernel: scsi host3: ahci Dec 13 00:24:55.207602 kernel: scsi host4: ahci Dec 13 00:24:55.207949 kernel: scsi host5: ahci Dec 13 00:24:55.207967 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Dec 13 00:24:55.207984 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Dec 13 00:24:55.207996 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Dec 13 00:24:55.208008 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Dec 13 00:24:55.208024 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Dec 13 00:24:55.208038 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Dec 13 00:24:55.208077 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 00:24:55.208090 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 00:24:55.208105 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 00:24:55.208117 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 00:24:55.208128 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 00:24:55.208140 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 00:24:55.208151 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:24:55.208163 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 00:24:55.208174 kernel: ata3.00: applying bridge limits Dec 13 00:24:55.208188 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:24:55.208198 kernel: ata3.00: configured for UDMA/100 Dec 13 00:24:55.208528 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 00:24:55.208860 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 00:24:55.209187 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 13 00:24:55.209211 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 00:24:55.209238 kernel: GPT:16515071 != 27000831 Dec 13 00:24:55.209257 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 13 00:24:55.209278 kernel: GPT:16515071 != 27000831 Dec 13 00:24:55.209298 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 00:24:55.209317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 00:24:55.209650 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 00:24:55.209680 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 00:24:55.210022 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 00:24:55.210077 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 00:24:55.210095 kernel: device-mapper: uevent: version 1.0.3 Dec 13 00:24:55.210108 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 13 00:24:55.210119 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 13 00:24:55.210132 kernel: raid6: avx2x4 gen() 22877 MB/s Dec 13 00:24:55.210143 kernel: raid6: avx2x2 gen() 21585 MB/s Dec 13 00:24:55.210164 kernel: raid6: avx2x1 gen() 18086 MB/s Dec 13 00:24:55.210179 kernel: raid6: using algorithm avx2x4 gen() 22877 MB/s Dec 13 00:24:55.210190 kernel: raid6: .... xor() 6511 MB/s, rmw enabled Dec 13 00:24:55.210202 kernel: raid6: using avx2x2 recovery algorithm Dec 13 00:24:55.210213 kernel: xor: automatically using best checksumming function avx Dec 13 00:24:55.210231 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 00:24:55.210253 kernel: BTRFS: device fsid 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181) Dec 13 00:24:55.210274 kernel: BTRFS info (device dm-0): first mount of filesystem 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d Dec 13 00:24:55.210286 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:24:55.210298 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 00:24:55.210310 kernel: BTRFS info (device dm-0): enabling free space tree Dec 13 00:24:55.210322 kernel: loop: module loaded Dec 13 00:24:55.210334 kernel: loop0: detected capacity change from 0 to 100528 Dec 13 00:24:55.210346 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 00:24:55.210362 systemd[1]: Successfully made /usr/ read-only. Dec 13 00:24:55.210378 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:24:55.210391 systemd[1]: Detected virtualization kvm. Dec 13 00:24:55.210404 systemd[1]: Detected architecture x86-64. Dec 13 00:24:55.210416 systemd[1]: Running in initrd. Dec 13 00:24:55.210428 systemd[1]: No hostname configured, using default hostname. Dec 13 00:24:55.210443 systemd[1]: Hostname set to . Dec 13 00:24:55.210456 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:24:55.210468 systemd[1]: Queued start job for default target initrd.target. Dec 13 00:24:55.210481 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:24:55.210493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:24:55.210507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 00:24:55.210523 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 00:24:55.210536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:24:55.210549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 00:24:55.210562 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 00:24:55.210575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:24:55.210588 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:24:55.210602 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:24:55.210626 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:24:55.210639 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:24:55.210660 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:24:55.210686 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:24:55.210700 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:24:55.210713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:24:55.210728 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:24:55.210741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 00:24:55.210753 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 13 00:24:55.210765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:24:55.210777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:24:55.210798 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:24:55.210811 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:24:55.210827 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 00:24:55.210840 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 00:24:55.210853 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:24:55.210865 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 00:24:55.210887 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 13 00:24:55.210910 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 00:24:55.210935 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:24:55.210959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:24:55.210981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:24:55.211003 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 00:24:55.211030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:24:55.211069 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 00:24:55.211092 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 00:24:55.211153 systemd-journald[315]: Collecting audit messages is enabled. 
Dec 13 00:24:55.211199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 00:24:55.211221 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:24:55.211244 kernel: audit: type=1130 audit(1765585495.196:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.211265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:24:55.211286 kernel: Bridge firewalling registered Dec 13 00:24:55.211308 systemd-journald[315]: Journal started Dec 13 00:24:55.211342 systemd-journald[315]: Runtime Journal (/run/log/journal/dff742e9eb9f4d408f8328b885e33d22) is 5.9M, max 47.8M, 41.8M free. Dec 13 00:24:55.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.211798 systemd-modules-load[318]: Inserted module 'br_netfilter' Dec 13 00:24:55.224521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:24:55.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.231099 kernel: audit: type=1130 audit(1765585495.227:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.231163 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:24:55.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.237569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:24:55.245304 kernel: audit: type=1130 audit(1765585495.233:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.245332 kernel: audit: type=1130 audit(1765585495.237:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.245410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:24:55.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.253297 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 00:24:55.257698 kernel: audit: type=1130 audit(1765585495.249:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.255326 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:24:55.270918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 00:24:55.281429 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 13 00:24:55.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.287015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:24:55.293640 kernel: audit: type=1130 audit(1765585495.287:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.293669 kernel: audit: type=1334 audit(1765585495.287:8): prog-id=6 op=LOAD Dec 13 00:24:55.287000 audit: BPF prog-id=6 op=LOAD Dec 13 00:24:55.289993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:24:55.298040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:24:55.305989 kernel: audit: type=1130 audit(1765585495.300:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.307345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:24:55.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.313447 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 00:24:55.318474 kernel: audit: type=1130 audit(1765585495.311:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.346654 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:24:55.367781 systemd-resolved[352]: Positive Trust Anchors: Dec 13 00:24:55.367808 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:24:55.367812 systemd-resolved[352]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:24:55.367843 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:24:55.394986 systemd-resolved[352]: Defaulting to hostname 'linux'. Dec 13 00:24:55.396306 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:24:55.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.400182 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:24:55.487096 kernel: Loading iSCSI transport class v2.0-870. Dec 13 00:24:55.503091 kernel: iscsi: registered transport (tcp) Dec 13 00:24:55.528223 kernel: iscsi: registered transport (qla4xxx) Dec 13 00:24:55.528305 kernel: QLogic iSCSI HBA Driver Dec 13 00:24:55.562527 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:24:55.601115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:24:55.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.603497 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:24:55.674267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 00:24:55.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.679829 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 00:24:55.683723 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 00:24:55.739763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:24:55.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.741000 audit: BPF prog-id=7 op=LOAD Dec 13 00:24:55.741000 audit: BPF prog-id=8 op=LOAD Dec 13 00:24:55.742313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:24:55.783908 systemd-udevd[596]: Using default interface naming scheme 'v257'. Dec 13 00:24:55.797156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:24:55.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:55.802837 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 00:24:55.830864 dracut-pre-trigger[665]: rd.md=0: removing MD RAID activation Dec 13 00:24:55.843190 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:24:55.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.849000 audit: BPF prog-id=9 op=LOAD Dec 13 00:24:55.850658 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:24:55.873230 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:24:55.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.875112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:24:55.931433 systemd-networkd[725]: lo: Link UP Dec 13 00:24:55.931443 systemd-networkd[725]: lo: Gained carrier Dec 13 00:24:55.934449 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:24:55.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.938548 systemd[1]: Reached target network.target - Network. Dec 13 00:24:55.975299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:24:55.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:55.988039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 00:24:56.040917 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 00:24:56.063593 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 00:24:56.081083 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 00:24:56.094339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 00:24:56.103088 kernel: AES CTR mode by8 optimization enabled Dec 13 00:24:56.110599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:24:56.118923 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:24:56.130222 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 13 00:24:56.118934 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:24:56.119202 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 00:24:56.121547 systemd-networkd[725]: eth0: Link UP Dec 13 00:24:56.123113 systemd-networkd[725]: eth0: Gained carrier Dec 13 00:24:56.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:56.123122 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:24:56.135356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:24:56.135472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:24:56.141634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:24:56.144743 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:24:56.145595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:24:56.164292 disk-uuid[827]: Primary Header is updated. Dec 13 00:24:56.164292 disk-uuid[827]: Secondary Entries is updated. Dec 13 00:24:56.164292 disk-uuid[827]: Secondary Header is updated. Dec 13 00:24:56.167734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:24:56.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:56.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:56.167865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:24:56.187142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:24:56.220389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:24:56.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:56.243317 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 00:24:56.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:56.245673 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:24:56.246586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:24:56.251435 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:24:56.258938 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 00:24:56.288110 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:24:56.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.203911 disk-uuid[832]: Warning: The kernel is still using the old partition table. Dec 13 00:24:57.203911 disk-uuid[832]: The new table will be used at the next reboot or after you Dec 13 00:24:57.203911 disk-uuid[832]: run partprobe(8) or kpartx(8) Dec 13 00:24:57.203911 disk-uuid[832]: The operation has completed successfully. Dec 13 00:24:57.216620 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 00:24:57.216767 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Dec 13 00:24:57.229701 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 00:24:57.229726 kernel: audit: type=1130 audit(1765585497.219:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.229756 kernel: audit: type=1131 audit(1765585497.219:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.222161 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 00:24:57.257961 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Dec 13 00:24:57.258032 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:24:57.258069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:24:57.263091 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:24:57.263153 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:24:57.272069 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:24:57.272518 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 00:24:57.278793 kernel: audit: type=1130 audit(1765585497.274:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.275501 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 00:24:57.413068 ignition[882]: Ignition 2.24.0 Dec 13 00:24:57.413086 ignition[882]: Stage: fetch-offline Dec 13 00:24:57.413145 ignition[882]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:24:57.413160 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:24:57.413254 ignition[882]: parsed url from cmdline: "" Dec 13 00:24:57.413258 ignition[882]: no config URL provided Dec 13 00:24:57.413364 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 00:24:57.413375 ignition[882]: no config at "/usr/lib/ignition/user.ign" Dec 13 00:24:57.413427 ignition[882]: op(1): [started] loading QEMU firmware config module Dec 13 00:24:57.413432 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 00:24:57.421714 ignition[882]: op(1): [finished] loading QEMU firmware config module Dec 13 00:24:57.504934 ignition[882]: parsing config with SHA512: 90cded9f9b86fb0180aa4fb72c22b21c9c62d40780dfd1c2d179637844900f0c3dd50740df6c6a48c9d0dab1efc17dd7bfd0b800a78fbca80070f034e0bc0d6a Dec 13 00:24:57.510590 unknown[882]: fetched base config from "system" Dec 13 00:24:57.511339 ignition[882]: fetch-offline: fetch-offline passed Dec 13 00:24:57.510610 unknown[882]: fetched user config from "qemu" Dec 13 00:24:57.511417 ignition[882]: Ignition finished successfully Dec 13 00:24:57.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.517071 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:24:57.524656 kernel: audit: type=1130 audit(1765585497.519:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.520156 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 00:24:57.521139 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 00:24:57.555329 ignition[892]: Ignition 2.24.0 Dec 13 00:24:57.555343 ignition[892]: Stage: kargs Dec 13 00:24:57.555502 ignition[892]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:24:57.555514 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:24:57.556649 ignition[892]: kargs: kargs passed Dec 13 00:24:57.556700 ignition[892]: Ignition finished successfully Dec 13 00:24:57.564214 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 00:24:57.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.566187 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 00:24:57.574354 kernel: audit: type=1130 audit(1765585497.564:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:57.593448 ignition[899]: Ignition 2.24.0 Dec 13 00:24:57.593460 ignition[899]: Stage: disks Dec 13 00:24:57.593602 ignition[899]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:24:57.593612 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:24:57.594414 ignition[899]: disks: disks passed Dec 13 00:24:57.594458 ignition[899]: Ignition finished successfully Dec 13 00:24:57.600236 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 00:24:57.605108 kernel: audit: type=1130 audit(1765585497.600:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.601171 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 00:24:57.605401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 00:24:57.616215 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:24:57.616938 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:24:57.620573 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:24:57.626076 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 00:24:57.668331 systemd-fsck[908]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 13 00:24:57.676598 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 00:24:57.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.681840 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 00:24:57.686821 kernel: audit: type=1130 audit(1765585497.680:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:57.803084 kernel: EXT4-fs (vda9): mounted filesystem fc518408-2cc6-461e-9cc3-fcafcb4d05ba r/w with ordered data mode. Quota mode: none. Dec 13 00:24:57.803709 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 00:24:57.804975 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 00:24:57.808375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:24:57.812864 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 00:24:57.816174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 00:24:57.816236 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Dec 13 00:24:57.833306 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916) Dec 13 00:24:57.833335 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:24:57.833347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:24:57.833359 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:24:57.833374 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:24:57.816274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:24:57.821423 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 00:24:57.835661 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 00:24:57.841291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:24:57.973167 systemd-networkd[725]: eth0: Gained IPv6LL Dec 13 00:24:58.013827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 00:24:58.022537 kernel: audit: type=1130 audit(1765585498.015:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:58.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:58.017186 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 00:24:58.022573 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 00:24:58.042075 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:24:58.056166 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 00:24:58.062766 kernel: audit: type=1130 audit(1765585498.057:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:58.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:58.069610 ignition[1014]: INFO : Ignition 2.24.0 Dec 13 00:24:58.069610 ignition[1014]: INFO : Stage: mount Dec 13 00:24:58.072376 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:24:58.072376 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:24:58.072376 ignition[1014]: INFO : mount: mount passed Dec 13 00:24:58.072376 ignition[1014]: INFO : Ignition finished successfully Dec 13 00:24:58.085615 kernel: audit: type=1130 audit(1765585498.078:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:58.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:58.075176 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 00:24:58.080152 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 00:24:58.247858 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 00:24:58.249555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:24:58.270366 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025) Dec 13 00:24:58.270407 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:24:58.270420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:24:58.275430 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:24:58.275454 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:24:58.277120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:24:58.311426 ignition[1042]: INFO : Ignition 2.24.0 Dec 13 00:24:58.311426 ignition[1042]: INFO : Stage: files Dec 13 00:24:58.314147 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:24:58.314147 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:24:58.314147 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Dec 13 00:24:58.319958 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 00:24:58.319958 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 00:24:58.328286 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 00:24:58.330812 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 00:24:58.330812 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 00:24:58.330812 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:24:58.329082 unknown[1042]: wrote ssh authorized keys file for user: core Dec 13 00:24:58.339742 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 13 00:24:58.371166 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 00:24:58.415872 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 13 00:24:58.415872 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:24:58.422502 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:24:58.443149 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:24:58.443149 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:24:58.443149 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:24:58.443149 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:24:58.443149 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:24:58.443149 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 13 00:24:58.888019 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 00:24:59.378452 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 13 00:24:59.378452 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 00:24:59.384980 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 00:24:59.416978 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:24:59.421490 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:24:59.424129 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 00:24:59.424129 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 00:24:59.424129 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 00:24:59.424129 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:24:59.424129 
ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:24:59.424129 ignition[1042]: INFO : files: files passed Dec 13 00:24:59.424129 ignition[1042]: INFO : Ignition finished successfully Dec 13 00:24:59.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.425620 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 00:24:59.428740 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 00:24:59.436796 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 00:24:59.457621 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 00:24:59.457789 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 00:24:59.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.465728 initrd-setup-root-after-ignition[1074]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 00:24:59.469949 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:24:59.469949 initrd-setup-root-after-ignition[1076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:24:59.475442 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:24:59.476378 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:24:59.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.477510 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 00:24:59.484016 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 00:24:59.521816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 00:24:59.523555 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 00:24:59.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.524755 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 00:24:59.530749 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 00:24:59.531644 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Dec 13 00:24:59.532497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 00:24:59.572289 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:24:59.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.577479 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 00:24:59.599030 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:24:59.599302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:24:59.600541 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:24:59.605617 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 00:24:59.608935 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 00:24:59.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.609179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:24:59.614550 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 00:24:59.618185 systemd[1]: Stopped target basic.target - Basic System. Dec 13 00:24:59.619225 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 00:24:59.623137 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:24:59.626774 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 00:24:59.630125 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:24:59.633813 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 00:24:59.636975 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:24:59.640137 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 00:24:59.643923 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 00:24:59.647077 systemd[1]: Stopped target swap.target - Swaps. Dec 13 00:24:59.650083 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 00:24:59.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.650285 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:24:59.655198 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:24:59.656558 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:24:59.660760 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 00:24:59.663778 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:24:59.667359 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 00:24:59.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.667495 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 13 00:24:59.672663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 00:24:59.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.672810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:24:59.676363 systemd[1]: Stopped target paths.target - Path Units. Dec 13 00:24:59.677547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 00:24:59.684128 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:24:59.684841 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 00:24:59.688993 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 00:24:59.691743 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 00:24:59.691831 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:24:59.694589 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 00:24:59.694680 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:24:59.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.697689 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 13 00:24:59.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.697770 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:24:59.700660 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 00:24:59.700803 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:24:59.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.703715 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 00:24:59.703820 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 00:24:59.707686 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 00:24:59.710418 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 00:24:59.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.710536 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:24:59.714703 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 00:24:59.719641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 00:24:59.719778 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:24:59.725041 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 00:24:59.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:24:59.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.725218 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:24:59.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.747752 ignition[1100]: INFO : Ignition 2.24.0 Dec 13 00:24:59.747752 ignition[1100]: INFO : Stage: umount Dec 13 00:24:59.747752 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:24:59.747752 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:24:59.747752 ignition[1100]: INFO : umount: umount passed Dec 13 00:24:59.747752 ignition[1100]: INFO : Ignition finished successfully Dec 13 00:24:59.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.728820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 00:24:59.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.729005 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:24:59.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.742305 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 00:24:59.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.742499 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 00:24:59.747817 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 00:24:59.747963 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 00:24:59.751562 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 00:24:59.755437 systemd[1]: Stopped target network.target - Network. Dec 13 00:24:59.757996 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 00:24:59.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.758089 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 00:24:59.760883 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Dec 13 00:24:59.760949 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 00:24:59.764575 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 00:24:59.764640 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 00:24:59.767618 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 00:24:59.767665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 00:24:59.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.797000 audit: BPF prog-id=6 op=UNLOAD Dec 13 00:24:59.770933 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 00:24:59.773896 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 00:24:59.783421 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 00:24:59.783559 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 00:24:59.795263 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 00:24:59.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.795387 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 00:24:59.800981 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 00:24:59.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.801758 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 00:24:59.801802 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:24:59.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.805817 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 00:24:59.808348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 00:24:59.808428 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:24:59.812003 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 00:24:59.833000 audit: BPF prog-id=9 op=UNLOAD Dec 13 00:24:59.812079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:24:59.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.816124 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 00:24:59.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.816212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 00:24:59.822260 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:24:59.826498 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 13 00:24:59.833731 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 00:24:59.837136 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 00:24:59.837225 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 00:24:59.853081 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 00:24:59.854754 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:24:59.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.859005 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 00:24:59.859084 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 00:24:59.862390 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 00:24:59.862430 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:24:59.866996 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 00:24:59.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.867102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:24:59.871447 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 00:24:59.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.871535 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 00:24:59.876106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 00:24:59.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.876168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:24:59.881962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 00:24:59.883866 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 13 00:24:59.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.883929 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:24:59.887394 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 00:24:59.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.887454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:24:59.890914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 00:24:59.890963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:24:59.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.892071 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 00:24:59.900169 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 00:24:59.908732 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 00:24:59.908894 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 00:24:59.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:24:59.909862 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 00:24:59.917752 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 00:24:59.949693 systemd[1]: Switching root. Dec 13 00:24:59.992900 systemd-journald[315]: Journal stopped Dec 13 00:25:01.711314 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Dec 13 00:25:01.711379 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 00:25:01.711397 kernel: SELinux: policy capability open_perms=1 Dec 13 00:25:01.711412 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 00:25:01.711424 kernel: SELinux: policy capability always_check_network=0 Dec 13 00:25:01.711441 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 00:25:01.711454 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 00:25:01.711466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 00:25:01.711478 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 00:25:01.711493 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 00:25:01.711506 systemd[1]: Successfully loaded SELinux policy in 68.607ms. Dec 13 00:25:01.711529 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.738ms. Dec 13 00:25:01.711543 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:25:01.711556 systemd[1]: Detected virtualization kvm. Dec 13 00:25:01.711569 systemd[1]: Detected architecture x86-64. Dec 13 00:25:01.711582 systemd[1]: Detected first boot. Dec 13 00:25:01.711601 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:25:01.711622 zram_generator::config[1145]: No configuration found. Dec 13 00:25:01.711638 kernel: Guest personality initialized and is inactive Dec 13 00:25:01.711651 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 13 00:25:01.711663 kernel: Initialized host personality Dec 13 00:25:01.711675 kernel: NET: Registered PF_VSOCK protocol family Dec 13 00:25:01.711688 systemd[1]: Populated /etc with preset unit settings. 
Dec 13 00:25:01.711703 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 00:25:01.711715 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 00:25:01.711728 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 00:25:01.711745 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 00:25:01.711760 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 00:25:01.711773 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 00:25:01.711791 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 00:25:01.711810 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 00:25:01.711826 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 00:25:01.711842 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 00:25:01.711859 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 00:25:01.711875 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:25:01.711891 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:25:01.711907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 00:25:01.711925 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 00:25:01.711941 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 00:25:01.711954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:25:01.711968 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 00:25:01.711981 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:25:01.711994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:25:01.712009 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 00:25:01.712022 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 00:25:01.712038 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 00:25:01.712064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 00:25:01.712077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:25:01.712091 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:25:01.712108 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 00:25:01.712123 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:25:01.712135 systemd[1]: Reached target swap.target - Swaps. Dec 13 00:25:01.712148 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 00:25:01.712162 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 00:25:01.712175 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 00:25:01.712187 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:25:01.712201 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Dec 13 00:25:01.712216 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:25:01.712229 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 00:25:01.712242 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 00:25:01.712255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:25:01.712268 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:25:01.712280 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 00:25:01.712293 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 00:25:01.712308 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 00:25:01.712321 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 00:25:01.712333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:01.712346 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 00:25:01.712359 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 00:25:01.712374 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 00:25:01.712395 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 00:25:01.712410 systemd[1]: Reached target machines.target - Containers. Dec 13 00:25:01.712423 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 00:25:01.712436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:01.712449 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:25:01.712464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 00:25:01.712477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:01.712491 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:25:01.712503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:01.712516 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 00:25:01.712530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:01.712545 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 00:25:01.712558 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 00:25:01.712572 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 00:25:01.712585 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 00:25:01.712600 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 00:25:01.712613 kernel: fuse: init (API version 7.41) Dec 13 00:25:01.712634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 13 00:25:01.712647 kernel: ACPI: bus type drm_connector registered Dec 13 00:25:01.712660 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:25:01.712674 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:25:01.712687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:25:01.712702 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 00:25:01.712715 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 13 00:25:01.712728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:25:01.712741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:01.712771 systemd-journald[1231]: Collecting audit messages is enabled. Dec 13 00:25:01.712795 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 00:25:01.712811 systemd-journald[1231]: Journal started Dec 13 00:25:01.712834 systemd-journald[1231]: Runtime Journal (/run/log/journal/dff742e9eb9f4d408f8328b885e33d22) is 5.9M, max 47.8M, 41.8M free. Dec 13 00:25:01.717205 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 00:25:01.488000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 00:25:01.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.653000 audit: BPF prog-id=14 op=UNLOAD Dec 13 00:25:01.653000 audit: BPF prog-id=13 op=UNLOAD Dec 13 00:25:01.671000 audit: BPF prog-id=15 op=LOAD Dec 13 00:25:01.671000 audit: BPF prog-id=16 op=LOAD Dec 13 00:25:01.671000 audit: BPF prog-id=17 op=LOAD Dec 13 00:25:01.709000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 00:25:01.709000 audit[1231]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe8483e6c0 a2=4000 a3=0 items=0 ppid=1 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:01.709000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 00:25:01.350406 systemd[1]: Queued start job for default target multi-user.target. Dec 13 00:25:01.720325 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 00:25:01.371073 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 00:25:01.371594 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 00:25:01.724067 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 13 00:25:01.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.725300 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 00:25:01.727247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 00:25:01.729131 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 00:25:01.731012 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 00:25:01.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.733307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:25:01.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.735637 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 00:25:01.735846 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 00:25:01.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.738180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:01.738389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:01.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.740447 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:25:01.740657 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:25:01.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.742673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:01.742876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
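Each of the modprobe@ template units finishing above loads a single kernel module (configfs, dm_mod, drm, efi_pstore, fuse, loop). Independently of systemd, the presence of those modules can be checked from /proc/modules and /sys/module, as in this small sketch (built-in modules usually appear under /sys/module rather than in /proc/modules):

    #!/usr/bin/env python3
    """See whether the modules loaded by the modprobe@ units are present."""
    import os

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def loaded_modules():
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    if __name__ == "__main__":
        loaded = loaded_modules()
        for name in MODULES:
            if name in loaded:
                state = "loaded as module"
            elif os.path.isdir(f"/sys/module/{name}"):
                state = "built into the kernel"
            else:
                state = "absent"
            print(f"{name:12} {state}")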
Dec 13 00:25:01.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.745115 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 00:25:01.745315 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 00:25:01.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.747432 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:01.747641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:01.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.749677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:25:01.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.751874 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:25:01.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.754900 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 00:25:01.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.757311 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 00:25:01.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.771976 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:25:01.774148 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 00:25:01.777366 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 13 00:25:01.780126 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 00:25:01.781906 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 00:25:01.781998 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:25:01.784685 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 13 00:25:01.787478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:01.787675 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:01.792170 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 00:25:01.795166 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 00:25:01.795843 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:25:01.799162 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 00:25:01.800930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:25:01.801919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:25:01.806038 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 00:25:01.811178 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 00:25:01.814188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:25:01.816238 systemd-journald[1231]: Time spent on flushing to /var/log/journal/dff742e9eb9f4d408f8328b885e33d22 is 18.429ms for 1159 entries. Dec 13 00:25:01.816238 systemd-journald[1231]: System Journal (/var/log/journal/dff742e9eb9f4d408f8328b885e33d22) is 8M, max 163.5M, 155.5M free. Dec 13 00:25:01.851333 systemd-journald[1231]: Received client request to flush runtime journal. Dec 13 00:25:01.851383 kernel: loop1: detected capacity change from 0 to 219144 Dec 13 00:25:01.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.817398 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 00:25:01.819806 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 00:25:01.830208 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 00:25:01.833188 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:25:01.837284 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
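systemd-sysctl ("Apply Kernel Variables" above) works by writing key=value pairs from sysctl.d fragments into /proc/sys. The sketch below shows that mechanism only; the sample file path is hypothetical, writes require root, and real deployments should keep using sysctl.d rather than a script like this.

    #!/usr/bin/env python3
    """Apply sysctl-style "key = value" lines by writing under /proc/sys."""
    import sys

    def apply_sysctl_file(path: str) -> None:
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if not line or line.startswith(("#", ";")):
                    continue
                key, _, value = line.partition("=")
                key, value = key.strip(), value.strip()
                # "net.ipv4.ip_forward" -> /proc/sys/net/ipv4/ip_forward
                target = "/proc/sys/" + key.replace(".", "/")
                with open(target, "w") as sysctl_file:
                    sysctl_file.write(value + "\n")
                print(f"set {key} = {value}")

    if __name__ == "__main__":
        # Hypothetical fragment path; pass a real sysctl.d file as argv[1].
        apply_sysctl_file(sys.argv[1] if len(sys.argv) > 1 else "/etc/sysctl.d/99-example.conf")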
Dec 13 00:25:01.843173 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 00:25:01.856457 kernel: loop2: detected capacity change from 0 to 171112 Dec 13 00:25:01.857742 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 00:25:01.860199 kernel: loop2: p1 p2 p3 Dec 13 00:25:01.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.865429 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 00:25:01.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.868000 audit: BPF prog-id=18 op=LOAD Dec 13 00:25:01.869000 audit: BPF prog-id=19 op=LOAD Dec 13 00:25:01.869000 audit: BPF prog-id=20 op=LOAD Dec 13 00:25:01.870072 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 00:25:01.872000 audit: BPF prog-id=21 op=LOAD Dec 13 00:25:01.874073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:25:01.879157 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:25:01.890000 audit: BPF prog-id=22 op=LOAD Dec 13 00:25:01.890000 audit: BPF prog-id=23 op=LOAD Dec 13 00:25:01.890000 audit: BPF prog-id=24 op=LOAD Dec 13 00:25:01.891191 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 13 00:25:01.893000 audit: BPF prog-id=25 op=LOAD Dec 13 00:25:01.895000 audit: BPF prog-id=26 op=LOAD Dec 13 00:25:01.895000 audit: BPF prog-id=27 op=LOAD Dec 13 00:25:01.898234 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 00:25:01.908118 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Dec 13 00:25:01.908343 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 00:25:01.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.915572 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Dec 13 00:25:01.915593 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Dec 13 00:25:01.921706 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 00:25:01.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.931062 kernel: loop3: detected capacity change from 0 to 375256 Dec 13 00:25:01.934075 kernel: loop3: p1 p2 p3 Dec 13 00:25:01.945303 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 00:25:01.948076 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 00:25:01.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:01.957806 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 00:25:01.959346 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 00:25:01.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:01.968163 kernel: loop4: detected capacity change from 0 to 219144 Dec 13 00:25:01.976097 kernel: loop5: detected capacity change from 0 to 171112 Dec 13 00:25:01.978070 kernel: loop5: p1 p2 p3 Dec 13 00:25:01.997066 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:01.997102 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:25:01.997129 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:25:01.997145 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:25:01.998021 (sd-merge)[1303]: device-mapper: reload ioctl on af67e6a29067aeda0590a0009488436dd8f718bac6be743160aad6f147c2927f-verity (253:1) failed: Invalid argument Dec 13 00:25:02.007068 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:02.034188 systemd-oomd[1280]: No swap; memory pressure usage will be degraded Dec 13 00:25:02.034921 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 00:25:02.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.041150 systemd-resolved[1281]: Positive Trust Anchors: Dec 13 00:25:02.041165 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:25:02.041170 systemd-resolved[1281]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:25:02.041201 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:25:02.044827 systemd-resolved[1281]: Defaulting to hostname 'linux'. Dec 13 00:25:02.046202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:25:02.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.048059 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:25:02.373203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 00:25:02.418469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
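systemd-oomd warns above that there is no swap, so memory-pressure handling is degraded. The two signals it relies on are visible to any process: whether swap is configured (/proc/swaps) and the kernel's pressure-stall information (/proc/pressure/memory, available when the kernel is built with PSI). A small sketch that reports both:

    #!/usr/bin/env python3
    """Report the signals systemd-oomd cares about: swap and memory PSI."""

    def has_swap() -> bool:
        with open("/proc/swaps") as f:
            return len(f.readlines()) > 1      # first line is just the header

    def memory_pressure():
        values = {}
        with open("/proc/pressure/memory") as f:
            for line in f:                     # "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
                kind, _, rest = line.partition(" ")
                fields = dict(item.split("=") for item in rest.split())
                values[kind] = float(fields["avg10"])
        return values

    if __name__ == "__main__":
        print("swap configured:", has_swap())
        for kind, avg10 in memory_pressure().items():
            print(f"memory pressure ({kind}): avg10={avg10}%")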
Dec 13 00:25:02.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.421578 kernel: kauditd_printk_skb: 110 callbacks suppressed Dec 13 00:25:02.421625 kernel: audit: type=1130 audit(1765585502.420:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.422281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:25:02.420000 audit: BPF prog-id=8 op=UNLOAD Dec 13 00:25:02.420000 audit: BPF prog-id=7 op=UNLOAD Dec 13 00:25:02.421000 audit: BPF prog-id=28 op=LOAD Dec 13 00:25:02.426789 kernel: audit: type=1334 audit(1765585502.420:148): prog-id=8 op=UNLOAD Dec 13 00:25:02.426807 kernel: audit: type=1334 audit(1765585502.420:149): prog-id=7 op=UNLOAD Dec 13 00:25:02.426826 kernel: audit: type=1334 audit(1765585502.421:150): prog-id=28 op=LOAD Dec 13 00:25:02.432340 kernel: audit: type=1334 audit(1765585502.421:151): prog-id=29 op=LOAD Dec 13 00:25:02.421000 audit: BPF prog-id=29 op=LOAD Dec 13 00:25:02.475279 systemd-udevd[1310]: Using default interface naming scheme 'v257'. Dec 13 00:25:02.494715 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:25:02.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.500501 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:25:02.504035 kernel: audit: type=1130 audit(1765585502.496:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.504132 kernel: audit: type=1334 audit(1765585502.497:153): prog-id=30 op=LOAD Dec 13 00:25:02.497000 audit: BPF prog-id=30 op=LOAD Dec 13 00:25:02.563496 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 00:25:02.569841 systemd-networkd[1316]: lo: Link UP Dec 13 00:25:02.570112 systemd-networkd[1316]: lo: Gained carrier Dec 13 00:25:02.572834 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:25:02.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.580076 kernel: audit: type=1130 audit(1765585502.574:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.581418 systemd[1]: Reached target network.target - Network. Dec 13 00:25:02.584498 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 00:25:02.587501 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
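systemd-networkd reports the loopback device as "Link UP" and "Gained carrier". The same per-interface state is exported in sysfs, which the following sketch simply reads back (reading carrier fails with EINVAL while a link is administratively down, hence the fallback):

    #!/usr/bin/env python3
    """Print interface state as visible in sysfs."""
    import os

    def link_states():
        for name in sorted(os.listdir("/sys/class/net")):
            base = f"/sys/class/net/{name}"
            with open(f"{base}/operstate") as f:
                operstate = f.read().strip()
            try:
                with open(f"{base}/carrier") as f:
                    carrier = f.read().strip() == "1"
            except OSError:            # link is down, carrier not readable
                carrier = False
            yield name, operstate, carrier

    if __name__ == "__main__":
        for name, operstate, carrier in link_states():
            print(f"{name:10} operstate={operstate:12} carrier={carrier}")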
Dec 13 00:25:02.599093 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 00:25:02.603308 systemd-networkd[1316]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:02.603425 systemd-networkd[1316]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:25:02.604068 systemd-networkd[1316]: eth0: Link UP Dec 13 00:25:02.604326 systemd-networkd[1316]: eth0: Gained carrier Dec 13 00:25:02.604400 systemd-networkd[1316]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:25:02.612157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:25:02.616092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 00:25:02.616988 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 00:25:02.618946 systemd-networkd[1316]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:25:02.621904 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 13 00:25:02.623085 kernel: ACPI: button: Power Button [PWRF] Dec 13 00:25:02.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.629129 kernel: audit: type=1130 audit(1765585502.624:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.647468 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 00:25:02.650275 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 00:25:02.650588 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 00:25:02.650808 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 00:25:02.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.662107 kernel: audit: type=1130 audit(1765585502.656:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.743174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:02.749916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:25:02.750185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:02.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:02.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.759773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:25:02.801936 kernel: kvm_amd: TSC scaling supported Dec 13 00:25:02.801993 kernel: kvm_amd: Nested Virtualization enabled Dec 13 00:25:02.802010 kernel: kvm_amd: Nested Paging enabled Dec 13 00:25:02.804061 kernel: kvm_amd: LBR virtualization supported Dec 13 00:25:02.804091 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 00:25:02.807072 kernel: kvm_amd: Virtual GIF supported Dec 13 00:25:02.824071 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Dec 13 00:25:02.826102 kernel: loop6: detected capacity change from 0 to 375256 Dec 13 00:25:02.831017 kernel: loop6: p1 p2 p3 Dec 13 00:25:02.830253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:25:02.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:02.846204 kernel: EDAC MC: Ver: 3.0.0 Dec 13 00:25:02.852273 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:02.852355 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:25:02.856721 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:25:02.856741 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:25:02.856744 (sd-merge)[1303]: device-mapper: reload ioctl on c81b0b335c4f741d8803812340292f37f57a6bdf618683fbcdb11178b8725544-verity (253:2) failed: Invalid argument Dec 13 00:25:02.859067 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:25:02.891068 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 00:25:02.892447 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 13 00:25:02.896182 (sd-merge)[1303]: Merged extensions into '/usr'. Dec 13 00:25:02.900217 systemd[1]: Reload requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 00:25:02.900235 systemd[1]: Reloading... Dec 13 00:25:02.950094 zram_generator::config[1411]: No configuration found. Dec 13 00:25:03.194123 systemd[1]: Reloading finished in 293 ms. Dec 13 00:25:03.226385 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 00:25:03.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.245666 systemd[1]: Starting ensure-sysext.service... Dec 13 00:25:03.248164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
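The (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr before a manager reload. As a hedged illustration, this sketch lists *.raw images in the usual extension directories (there are additional search paths under /usr) and then asks the real systemd-sysext tool what is currently merged:

    #!/usr/bin/env python3
    """List system extension images and show what systemd-sysext has merged."""
    import glob
    import subprocess

    # Common directories systemd-sysext searches for *.raw extension images.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    if __name__ == "__main__":
        for directory in SEARCH_DIRS:
            for image in sorted(glob.glob(f"{directory}/*.raw")):
                print("extension image:", image)
        # Ask systemd-sysext itself (requires a systemd new enough to ship it).
        subprocess.run(["systemd-sysext", "status"], check=False)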
Dec 13 00:25:03.250000 audit: BPF prog-id=31 op=LOAD Dec 13 00:25:03.250000 audit: BPF prog-id=22 op=UNLOAD Dec 13 00:25:03.251000 audit: BPF prog-id=32 op=LOAD Dec 13 00:25:03.251000 audit: BPF prog-id=33 op=LOAD Dec 13 00:25:03.251000 audit: BPF prog-id=23 op=UNLOAD Dec 13 00:25:03.251000 audit: BPF prog-id=24 op=UNLOAD Dec 13 00:25:03.251000 audit: BPF prog-id=34 op=LOAD Dec 13 00:25:03.251000 audit: BPF prog-id=35 op=LOAD Dec 13 00:25:03.251000 audit: BPF prog-id=28 op=UNLOAD Dec 13 00:25:03.251000 audit: BPF prog-id=29 op=UNLOAD Dec 13 00:25:03.252000 audit: BPF prog-id=36 op=LOAD Dec 13 00:25:03.252000 audit: BPF prog-id=21 op=UNLOAD Dec 13 00:25:03.253000 audit: BPF prog-id=37 op=LOAD Dec 13 00:25:03.253000 audit: BPF prog-id=25 op=UNLOAD Dec 13 00:25:03.253000 audit: BPF prog-id=38 op=LOAD Dec 13 00:25:03.253000 audit: BPF prog-id=39 op=LOAD Dec 13 00:25:03.253000 audit: BPF prog-id=26 op=UNLOAD Dec 13 00:25:03.253000 audit: BPF prog-id=27 op=UNLOAD Dec 13 00:25:03.254000 audit: BPF prog-id=40 op=LOAD Dec 13 00:25:03.254000 audit: BPF prog-id=15 op=UNLOAD Dec 13 00:25:03.254000 audit: BPF prog-id=41 op=LOAD Dec 13 00:25:03.254000 audit: BPF prog-id=42 op=LOAD Dec 13 00:25:03.254000 audit: BPF prog-id=16 op=UNLOAD Dec 13 00:25:03.254000 audit: BPF prog-id=17 op=UNLOAD Dec 13 00:25:03.256000 audit: BPF prog-id=43 op=LOAD Dec 13 00:25:03.256000 audit: BPF prog-id=18 op=UNLOAD Dec 13 00:25:03.256000 audit: BPF prog-id=44 op=LOAD Dec 13 00:25:03.256000 audit: BPF prog-id=45 op=LOAD Dec 13 00:25:03.256000 audit: BPF prog-id=19 op=UNLOAD Dec 13 00:25:03.256000 audit: BPF prog-id=20 op=UNLOAD Dec 13 00:25:03.258000 audit: BPF prog-id=46 op=LOAD Dec 13 00:25:03.258000 audit: BPF prog-id=30 op=UNLOAD Dec 13 00:25:03.263343 systemd[1]: Reload requested from client PID 1447 ('systemctl') (unit ensure-sysext.service)... Dec 13 00:25:03.263358 systemd[1]: Reloading... Dec 13 00:25:03.266503 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 00:25:03.266542 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 00:25:03.266854 systemd-tmpfiles[1448]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 00:25:03.268270 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Dec 13 00:25:03.268345 systemd-tmpfiles[1448]: ACLs are not supported, ignoring. Dec 13 00:25:03.274484 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:25:03.274496 systemd-tmpfiles[1448]: Skipping /boot Dec 13 00:25:03.286181 systemd-tmpfiles[1448]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:25:03.286198 systemd-tmpfiles[1448]: Skipping /boot Dec 13 00:25:03.316125 zram_generator::config[1485]: No configuration found. Dec 13 00:25:03.546412 systemd[1]: Reloading finished in 282 ms. 
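systemd-tmpfiles logs "Duplicate line for path ..., ignoring" for a few entries during the reload. A rough approximation of that check is to scan tmpfiles.d fragments for paths declared more than once; the sketch below ignores the precedence rules the real tool applies between /etc, /run and /usr, and does not handle quoted paths.

    #!/usr/bin/env python3
    """Find tmpfiles.d paths declared more than once (rough approximation)."""
    import collections
    import glob

    TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

    def collect_paths():
        seen = collections.defaultdict(list)
        for directory in TMPFILES_DIRS:
            for conf in sorted(glob.glob(f"{directory}/*.conf")):
                with open(conf) as f:
                    for lineno, raw in enumerate(f, start=1):
                        line = raw.strip()
                        if not line or line.startswith("#"):
                            continue
                        fields = line.split()
                        if len(fields) >= 2:   # fields[1] is the path column
                            seen[fields[1]].append(f"{conf}:{lineno}")
        return seen

    if __name__ == "__main__":
        for path, sources in collect_paths().items():
            if len(sources) > 1:
                print(path, "->", ", ".join(sources))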
Dec 13 00:25:03.572000 audit: BPF prog-id=47 op=LOAD Dec 13 00:25:03.572000 audit: BPF prog-id=48 op=LOAD Dec 13 00:25:03.572000 audit: BPF prog-id=34 op=UNLOAD Dec 13 00:25:03.572000 audit: BPF prog-id=35 op=UNLOAD Dec 13 00:25:03.573000 audit: BPF prog-id=49 op=LOAD Dec 13 00:25:03.573000 audit: BPF prog-id=43 op=UNLOAD Dec 13 00:25:03.573000 audit: BPF prog-id=50 op=LOAD Dec 13 00:25:03.573000 audit: BPF prog-id=51 op=LOAD Dec 13 00:25:03.573000 audit: BPF prog-id=44 op=UNLOAD Dec 13 00:25:03.573000 audit: BPF prog-id=45 op=UNLOAD Dec 13 00:25:03.574000 audit: BPF prog-id=52 op=LOAD Dec 13 00:25:03.574000 audit: BPF prog-id=36 op=UNLOAD Dec 13 00:25:03.576000 audit: BPF prog-id=53 op=LOAD Dec 13 00:25:03.576000 audit: BPF prog-id=40 op=UNLOAD Dec 13 00:25:03.576000 audit: BPF prog-id=54 op=LOAD Dec 13 00:25:03.576000 audit: BPF prog-id=55 op=LOAD Dec 13 00:25:03.576000 audit: BPF prog-id=41 op=UNLOAD Dec 13 00:25:03.576000 audit: BPF prog-id=42 op=UNLOAD Dec 13 00:25:03.592000 audit: BPF prog-id=56 op=LOAD Dec 13 00:25:03.592000 audit: BPF prog-id=46 op=UNLOAD Dec 13 00:25:03.593000 audit: BPF prog-id=57 op=LOAD Dec 13 00:25:03.593000 audit: BPF prog-id=37 op=UNLOAD Dec 13 00:25:03.593000 audit: BPF prog-id=58 op=LOAD Dec 13 00:25:03.593000 audit: BPF prog-id=59 op=LOAD Dec 13 00:25:03.593000 audit: BPF prog-id=38 op=UNLOAD Dec 13 00:25:03.593000 audit: BPF prog-id=39 op=UNLOAD Dec 13 00:25:03.594000 audit: BPF prog-id=60 op=LOAD Dec 13 00:25:03.594000 audit: BPF prog-id=31 op=UNLOAD Dec 13 00:25:03.594000 audit: BPF prog-id=61 op=LOAD Dec 13 00:25:03.594000 audit: BPF prog-id=62 op=LOAD Dec 13 00:25:03.594000 audit: BPF prog-id=32 op=UNLOAD Dec 13 00:25:03.594000 audit: BPF prog-id=33 op=UNLOAD Dec 13 00:25:03.597811 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:25:03.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.610364 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:25:03.613745 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 00:25:03.617408 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 00:25:03.633158 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 00:25:03.638907 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 00:25:03.645441 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:03.645652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:03.650312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:03.653844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:03.656000 audit[1525]: SYSTEM_BOOT pid=1525 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.658480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
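clean-ca-certificates.service ("Clean up broken links in /etc/ssl/certs") boils down to finding symlinks whose target no longer exists. A sketch of that check, printing instead of deleting:

    #!/usr/bin/env python3
    """List (not delete) broken symlinks under /etc/ssl/certs."""
    import os

    CERT_DIR = "/etc/ssl/certs"

    if __name__ == "__main__":
        for name in sorted(os.listdir(CERT_DIR)):
            path = os.path.join(CERT_DIR, name)
            # islink() is true even when the target is gone; exists() follows the link.
            if os.path.islink(path) and not os.path.exists(path):
                print("broken link:", path, "->", os.readlink(path))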
Dec 13 00:25:03.663197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:03.663419 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:03.663530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:03.663680 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:03.665502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:03.665864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:03.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.669713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:03.670968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:03.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.674404 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:03.674735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:03.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:03.692828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:03.694347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:03.695895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:03.701256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:03.707040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:03.712012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 00:25:03.712215 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:25:03.712312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:03.712404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:03.712000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:25:03.712000 audit[1553]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1d917f30 a2=420 a3=0 items=0 ppid=1520 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:03.712000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:03.713001 augenrules[1553]: No rules Dec 13 00:25:03.714665 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:25:03.715151 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:25:03.717457 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 00:25:03.720223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 00:25:03.722772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:03.723030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:03.726067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:03.726331 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:03.729067 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:03.729329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:03.736889 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 00:25:03.748747 systemd[1]: Finished ensure-sysext.service. Dec 13 00:25:03.752662 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:03.754027 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:25:03.756015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:25:03.757353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:25:03.772679 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:25:03.775781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:25:03.780168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:25:03.782280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:25:03.782394 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
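The audit records around the auditctl runs carry the process title as a hex blob in the PROCTITLE field; decoding it and splitting on NUL bytes recovers the argv. For the value logged above this yields /sbin/auditctl -R /etc/audit/audit.rules, as this short decoder shows:

    #!/usr/bin/env python3
    """Decode an audit PROCTITLE hex field into its NUL-separated argv."""

    def decode_proctitle(hex_blob: str) -> list[str]:
        return bytes.fromhex(hex_blob).decode("utf-8", "replace").split("\x00")

    if __name__ == "__main__":
        # Value copied from the audit record above.
        blob = ("2F7362696E2F617564697463746C002D52002F6574632F6175"
                "6469742F61756469742E72756C6573")
        print(decode_proctitle(blob))   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']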
Dec 13 00:25:03.782441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:25:03.784340 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 00:25:03.786249 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 00:25:03.786282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:25:03.787031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:25:03.787350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:25:03.790178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:25:03.790509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:25:03.793195 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:25:03.796228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:25:03.799520 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:25:03.799787 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:25:03.803021 augenrules[1566]: /sbin/augenrules: No change Dec 13 00:25:03.804420 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:25:03.804486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:25:03.809000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:25:03.809000 audit[1592]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd4ee82cd0 a2=420 a3=0 items=0 ppid=1566 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:03.809000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:03.809000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:25:03.809000 audit[1592]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd4ee85160 a2=420 a3=0 items=0 ppid=1566 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:03.809000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:03.810153 augenrules[1592]: No rules Dec 13 00:25:03.811857 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:25:03.815161 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:25:03.861320 systemd-networkd[1316]: eth0: Gained IPv6LL Dec 13 00:25:03.865059 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 13 00:25:03.867377 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 00:25:03.882981 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 00:25:04.664651 systemd-resolved[1281]: Clock change detected. Flushing caches. Dec 13 00:25:04.664738 systemd-timesyncd[1572]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 00:25:04.664763 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 00:25:04.664792 systemd-timesyncd[1572]: Initial clock synchronization to Sat 2025-12-13 00:25:04.664594 UTC. Dec 13 00:25:04.861423 ldconfig[1522]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 00:25:04.867729 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 00:25:04.871219 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 00:25:04.904474 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 00:25:04.906576 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:25:04.908414 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 00:25:04.910426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 00:25:04.912485 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 13 00:25:04.914510 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 00:25:04.916338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 00:25:04.918395 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 13 00:25:04.920576 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 00:25:04.922376 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 00:25:04.924370 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 00:25:04.924401 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:25:04.925817 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:25:04.928191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 00:25:04.931676 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 00:25:04.935290 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 00:25:04.937421 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 00:25:04.939413 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 13 00:25:04.947297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 00:25:04.949346 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 00:25:04.951984 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 00:25:04.954571 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:25:04.956108 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:25:04.957630 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
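systemd-timesyncd contacted 10.0.0.1:123 and stepped the clock, which is why systemd-resolved logs "Clock change detected. Flushing caches." The sketch below is a minimal SNTP query showing the kind of exchange involved; the server name is a placeholder, and a real client like timesyncd also polls repeatedly and slews the clock rather than always stepping it.

    #!/usr/bin/env python3
    """Minimal SNTP query: ask an NTP server for its transmit timestamp."""
    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"       # placeholder; the server in the log was 10.0.0.1
    NTP_EPOCH_OFFSET = 2208988800     # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = NTP_SERVER) -> float:
        # First byte 0x1b: LI=0, VN=3, Mode=3 (client); rest of the 48-byte packet zeroed.
        request = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(request, (server, 123))
            response, _ = sock.recvfrom(48)
        # Transmit timestamp: 32.32 fixed-point seconds since 1900, at offset 40.
        seconds, fraction = struct.unpack("!II", response[40:48])
        return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

    if __name__ == "__main__":
        server_time = sntp_time()
        print("server time:", server_time, "offset vs local:", server_time - time.time(), "s")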
Dec 13 00:25:04.957657 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:25:04.958715 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 00:25:04.961321 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 00:25:04.963879 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 00:25:04.973751 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 00:25:04.976711 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 00:25:04.979677 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 00:25:04.981333 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 00:25:04.982455 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 13 00:25:04.986587 jq[1612]: false Dec 13 00:25:04.987322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:04.990470 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 00:25:04.993503 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 00:25:04.999244 extend-filesystems[1613]: Found /dev/vda6 Dec 13 00:25:04.997450 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 00:25:04.999457 oslogin_cache_refresh[1614]: Refreshing passwd entry cache Dec 13 00:25:05.000912 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Refreshing passwd entry cache Dec 13 00:25:05.001296 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 00:25:05.003618 extend-filesystems[1613]: Found /dev/vda9 Dec 13 00:25:05.005293 extend-filesystems[1613]: Checking size of /dev/vda9 Dec 13 00:25:05.007039 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Failure getting users, quitting Dec 13 00:25:05.007031 oslogin_cache_refresh[1614]: Failure getting users, quitting Dec 13 00:25:05.007291 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:25:05.007291 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Refreshing group entry cache Dec 13 00:25:05.007055 oslogin_cache_refresh[1614]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:25:05.007106 oslogin_cache_refresh[1614]: Refreshing group entry cache Dec 13 00:25:05.012397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 00:25:05.014171 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Failure getting groups, quitting Dec 13 00:25:05.014171 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:25:05.014164 oslogin_cache_refresh[1614]: Failure getting groups, quitting Dec 13 00:25:05.014176 oslogin_cache_refresh[1614]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:25:05.019026 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 00:25:05.020677 extend-filesystems[1613]: Resized partition /dev/vda9 Dec 13 00:25:05.021360 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 00:25:05.024078 extend-filesystems[1633]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 00:25:05.024630 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 00:25:05.028367 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 00:25:05.031748 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 00:25:05.037254 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 00:25:05.042475 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 00:25:05.045436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 00:25:05.045746 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 00:25:05.046125 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 13 00:25:05.046389 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 13 00:25:05.052777 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 00:25:05.054289 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 00:25:05.061832 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 00:25:05.065415 update_engine[1636]: I20251213 00:25:05.065055 1636 main.cc:92] Flatcar Update Engine starting Dec 13 00:25:05.065662 jq[1643]: true Dec 13 00:25:05.063294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 00:25:05.072753 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 00:25:05.074715 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 00:25:05.095763 extend-filesystems[1633]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 00:25:05.095763 extend-filesystems[1633]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 00:25:05.095763 extend-filesystems[1633]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 13 00:25:05.104897 extend-filesystems[1613]: Resized filesystem in /dev/vda9 Dec 13 00:25:05.096695 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 00:25:05.098319 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 00:25:05.111055 jq[1660]: true Dec 13 00:25:05.102750 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 00:25:05.103044 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 00:25:05.134433 tar[1656]: linux-amd64/LICENSE Dec 13 00:25:05.134718 tar[1656]: linux-amd64/helm Dec 13 00:25:05.148632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 00:25:05.162536 bash[1695]: Updated "/home/core/.ssh/authorized_keys" Dec 13 00:25:05.162129 systemd-logind[1630]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 00:25:05.162154 systemd-logind[1630]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 00:25:05.164426 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 00:25:05.164537 systemd-logind[1630]: New seat seat0. Dec 13 00:25:05.171022 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 00:25:05.177874 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
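extend-filesystems grows the root filesystem online: the kernel reports the ext4 resize from 456704 to 1784827 4 KiB blocks, roughly 1.7 GiB to 6.8 GiB. A hedged sketch of the same operation using the real resize2fs tool (device name taken from the log; requires root, and without a size argument resize2fs grows the filesystem to fill the device, which ext4 supports while mounted):

    #!/usr/bin/env python3
    """Grow a mounted ext4 filesystem to fill its partition, as extend-filesystems does."""
    import subprocess

    DEVICE = "/dev/vda9"   # root filesystem device from the boot log

    def online_resize(device: str) -> None:
        # Show the current superblock summary first (read-only).
        subprocess.run(["dumpe2fs", "-h", device], check=False)
        # Online grow to the full size of the underlying device.
        subprocess.run(["resize2fs", device], check=True)

    if __name__ == "__main__":
        online_resize(DEVICE)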
Dec 13 00:25:05.184871 dbus-daemon[1610]: [system] SELinux support is enabled Dec 13 00:25:05.185075 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 00:25:05.191548 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 00:25:05.191575 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 00:25:05.194868 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 00:25:05.194892 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 00:25:05.198789 update_engine[1636]: I20251213 00:25:05.198750 1636 update_check_scheduler.cc:74] Next update check in 2m41s Dec 13 00:25:05.199910 systemd[1]: Started update-engine.service - Update Engine. Dec 13 00:25:05.203070 dbus-daemon[1610]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 00:25:05.203508 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 00:25:05.275311 sshd_keygen[1645]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 00:25:05.291594 locksmithd[1697]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 00:25:05.320970 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 00:25:05.324618 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 00:25:05.351613 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 00:25:05.351937 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 00:25:05.357894 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 00:25:05.383435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 00:25:05.390066 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 00:25:05.393931 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 00:25:05.395888 systemd[1]: Reached target getty.target - Login Prompts. 
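sshd_keygen reports generating the RSA, ECDSA and ED25519 host keys. The stock tool for this is ssh-keygen, whose -A option creates any missing default host keys under /etc/ssh; a thin wrapper as a sketch (run as root):

    #!/usr/bin/env python3
    """Generate any missing SSH host keys, as sshd-keygen-style units do."""
    import glob
    import subprocess

    if __name__ == "__main__":
        # -A: create every default host key type that does not exist yet.
        subprocess.run(["ssh-keygen", "-A"], check=True)
        for key in sorted(glob.glob("/etc/ssh/ssh_host_*_key.pub")):
            print("host key:", key)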
Dec 13 00:25:05.403106 containerd[1658]: time="2025-12-13T00:25:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 00:25:05.403869 containerd[1658]: time="2025-12-13T00:25:05.403696608Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417562292Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.912µs" Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417613548Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417665645Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417678900Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417902830Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417920824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.417991787Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418133 containerd[1658]: time="2025-12-13T00:25:05.418004761Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418391 containerd[1658]: time="2025-12-13T00:25:05.418350049Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418391 containerd[1658]: time="2025-12-13T00:25:05.418374365Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418391 containerd[1658]: time="2025-12-13T00:25:05.418386778Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418456 containerd[1658]: time="2025-12-13T00:25:05.418397708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418687 containerd[1658]: time="2025-12-13T00:25:05.418652727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 00:25:05.418827 containerd[1658]: time="2025-12-13T00:25:05.418790445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 00:25:05.419068 containerd[1658]: time="2025-12-13T00:25:05.419034923Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:25:05.419092 containerd[1658]: 
time="2025-12-13T00:25:05.419081300Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:25:05.419120 containerd[1658]: time="2025-12-13T00:25:05.419093994Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 00:25:05.419161 containerd[1658]: time="2025-12-13T00:25:05.419140591Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 00:25:05.422567 containerd[1658]: time="2025-12-13T00:25:05.421901730Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 00:25:05.422567 containerd[1658]: time="2025-12-13T00:25:05.422082939Z" level=info msg="metadata content store policy set" policy=shared Dec 13 00:25:05.487812 tar[1656]: linux-amd64/README.md Dec 13 00:25:05.511834 containerd[1658]: time="2025-12-13T00:25:05.511754646Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 00:25:05.511925 containerd[1658]: time="2025-12-13T00:25:05.511862879Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:25:05.512012 containerd[1658]: time="2025-12-13T00:25:05.511979217Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:25:05.512012 containerd[1658]: time="2025-12-13T00:25:05.512007210Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 00:25:05.512069 containerd[1658]: time="2025-12-13T00:25:05.512026225Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 00:25:05.512069 containerd[1658]: time="2025-12-13T00:25:05.512043007Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 00:25:05.512069 containerd[1658]: time="2025-12-13T00:25:05.512061331Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 00:25:05.512140 containerd[1658]: time="2025-12-13T00:25:05.512074746Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 00:25:05.512140 containerd[1658]: time="2025-12-13T00:25:05.512091367Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 00:25:05.512140 containerd[1658]: time="2025-12-13T00:25:05.512109582Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 00:25:05.512140 containerd[1658]: time="2025-12-13T00:25:05.512125511Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 00:25:05.512140 containerd[1658]: time="2025-12-13T00:25:05.512138967Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 00:25:05.512280 containerd[1658]: time="2025-12-13T00:25:05.512154646Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 00:25:05.512280 containerd[1658]: time="2025-12-13T00:25:05.512170846Z" level=info msg="loading plugin" 
id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 13 00:25:05.512402 containerd[1658]: time="2025-12-13T00:25:05.512377223Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 00:25:05.512431 containerd[1658]: time="2025-12-13T00:25:05.512405567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 00:25:05.512452 containerd[1658]: time="2025-12-13T00:25:05.512429942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 00:25:05.512452 containerd[1658]: time="2025-12-13T00:25:05.512448327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 00:25:05.512504 containerd[1658]: time="2025-12-13T00:25:05.512461311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 00:25:05.512504 containerd[1658]: time="2025-12-13T00:25:05.512473133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 00:25:05.512504 containerd[1658]: time="2025-12-13T00:25:05.512494293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 00:25:05.512615 containerd[1658]: time="2025-12-13T00:25:05.512517697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 00:25:05.512615 containerd[1658]: time="2025-12-13T00:25:05.512532124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 00:25:05.512615 containerd[1658]: time="2025-12-13T00:25:05.512544417Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 00:25:05.512615 containerd[1658]: time="2025-12-13T00:25:05.512556139Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 00:25:05.512957 containerd[1658]: time="2025-12-13T00:25:05.512912738Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 00:25:05.513088 containerd[1658]: time="2025-12-13T00:25:05.513065494Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 00:25:05.513088 containerd[1658]: time="2025-12-13T00:25:05.513082446Z" level=info msg="Start snapshots syncer" Dec 13 00:25:05.514983 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 00:25:05.515348 containerd[1658]: time="2025-12-13T00:25:05.515301387Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 00:25:05.515683 containerd[1658]: time="2025-12-13T00:25:05.515625034Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 00:25:05.515870 containerd[1658]: time="2025-12-13T00:25:05.515686610Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 00:25:05.517476 containerd[1658]: time="2025-12-13T00:25:05.517441902Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 00:25:05.517701 containerd[1658]: time="2025-12-13T00:25:05.517677573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 00:25:05.517745 containerd[1658]: time="2025-12-13T00:25:05.517706107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 00:25:05.517745 containerd[1658]: time="2025-12-13T00:25:05.517730443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 00:25:05.517745 containerd[1658]: time="2025-12-13T00:25:05.517741513Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 00:25:05.517843 containerd[1658]: time="2025-12-13T00:25:05.517756972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 00:25:05.517843 containerd[1658]: time="2025-12-13T00:25:05.517770448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 00:25:05.517843 containerd[1658]: 
time="2025-12-13T00:25:05.517783823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 00:25:05.517843 containerd[1658]: time="2025-12-13T00:25:05.517798079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 00:25:05.517843 containerd[1658]: time="2025-12-13T00:25:05.517811585Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 00:25:05.518282 containerd[1658]: time="2025-12-13T00:25:05.518226864Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:25:05.518282 containerd[1658]: time="2025-12-13T00:25:05.518274072Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:25:05.518282 containerd[1658]: time="2025-12-13T00:25:05.518285544Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518295623Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518303908Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518319557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518340837Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518365443Z" level=info msg="runtime interface created" Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518372316Z" level=info msg="created NRI interface" Dec 13 00:25:05.518396 containerd[1658]: time="2025-12-13T00:25:05.518385201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 13 00:25:05.518582 containerd[1658]: time="2025-12-13T00:25:05.518407472Z" level=info msg="Connect containerd service" Dec 13 00:25:05.518582 containerd[1658]: time="2025-12-13T00:25:05.518436687Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 00:25:05.519910 containerd[1658]: time="2025-12-13T00:25:05.519859946Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:25:05.654872 containerd[1658]: time="2025-12-13T00:25:05.654748270Z" level=info msg="Start subscribing containerd event" Dec 13 00:25:05.654872 containerd[1658]: time="2025-12-13T00:25:05.654816207Z" level=info msg="Start recovering state" Dec 13 00:25:05.655381 containerd[1658]: time="2025-12-13T00:25:05.655345350Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 00:25:05.655433 containerd[1658]: time="2025-12-13T00:25:05.655385595Z" level=info msg="Start event monitor" Dec 13 00:25:05.655433 containerd[1658]: time="2025-12-13T00:25:05.655402717Z" level=info msg="Start cni network conf syncer for default" Dec 13 00:25:05.655433 containerd[1658]: time="2025-12-13T00:25:05.655411534Z" level=info msg="Start streaming server" Dec 13 00:25:05.655433 containerd[1658]: time="2025-12-13T00:25:05.655420350Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 13 00:25:05.655433 containerd[1658]: time="2025-12-13T00:25:05.655428806Z" level=info msg="runtime interface starting up..." Dec 13 00:25:05.655568 containerd[1658]: time="2025-12-13T00:25:05.655436180Z" level=info msg="starting plugins..." Dec 13 00:25:05.655568 containerd[1658]: time="2025-12-13T00:25:05.655450637Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 13 00:25:05.655568 containerd[1658]: time="2025-12-13T00:25:05.655533613Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 00:25:05.655683 containerd[1658]: time="2025-12-13T00:25:05.655626647Z" level=info msg="containerd successfully booted in 0.253193s" Dec 13 00:25:05.655856 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 00:25:06.096196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:06.099253 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 00:25:06.101516 systemd[1]: Startup finished in 2.863s (kernel) + 5.808s (initrd) + 4.819s (userspace) = 13.491s. Dec 13 00:25:06.110632 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:06.490898 kubelet[1748]: E1213 00:25:06.490837 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:25:06.494591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:25:06.494798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:25:06.495192 systemd[1]: kubelet.service: Consumed 955ms CPU time, 256.8M memory peak. Dec 13 00:25:07.497737 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 00:25:07.498953 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:37122.service - OpenSSH per-connection server daemon (10.0.0.1:37122). Dec 13 00:25:07.599823 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 37122 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:07.602334 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:07.610025 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 00:25:07.611227 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 00:25:07.616019 systemd-logind[1630]: New session 1 of user core. Dec 13 00:25:07.634855 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 00:25:07.638331 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 13 00:25:07.662271 (systemd)[1768]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:07.665523 systemd-logind[1630]: New session 2 of user core. Dec 13 00:25:07.860206 systemd[1768]: Queued start job for default target default.target. Dec 13 00:25:07.884746 systemd[1768]: Created slice app.slice - User Application Slice. Dec 13 00:25:07.884779 systemd[1768]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 13 00:25:07.884796 systemd[1768]: Reached target paths.target - Paths. Dec 13 00:25:07.884855 systemd[1768]: Reached target timers.target - Timers. Dec 13 00:25:07.886576 systemd[1768]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 00:25:07.887725 systemd[1768]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 13 00:25:07.898988 systemd[1768]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 00:25:07.899094 systemd[1768]: Reached target sockets.target - Sockets. Dec 13 00:25:07.904027 systemd[1768]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 13 00:25:07.904157 systemd[1768]: Reached target basic.target - Basic System. Dec 13 00:25:07.904225 systemd[1768]: Reached target default.target - Main User Target. Dec 13 00:25:07.904292 systemd[1768]: Startup finished in 232ms. Dec 13 00:25:07.904505 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 00:25:07.911449 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 00:25:07.931331 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:37138.service - OpenSSH per-connection server daemon (10.0.0.1:37138). Dec 13 00:25:08.002349 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 37138 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:08.004010 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:08.009191 systemd-logind[1630]: New session 3 of user core. Dec 13 00:25:08.015370 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 00:25:08.029508 sshd[1786]: Connection closed by 10.0.0.1 port 37138 Dec 13 00:25:08.029816 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:08.042060 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:37138.service: Deactivated successfully. Dec 13 00:25:08.044316 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 00:25:08.045192 systemd-logind[1630]: Session 3 logged out. Waiting for processes to exit. Dec 13 00:25:08.048783 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:37152.service - OpenSSH per-connection server daemon (10.0.0.1:37152). Dec 13 00:25:08.049696 systemd-logind[1630]: Removed session 3. Dec 13 00:25:08.112490 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:08.114064 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:08.119332 systemd-logind[1630]: New session 4 of user core. Dec 13 00:25:08.128390 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 00:25:08.137890 sshd[1797]: Connection closed by 10.0.0.1 port 37152 Dec 13 00:25:08.138266 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:08.148624 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:37152.service: Deactivated successfully. 
Dec 13 00:25:08.150933 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 00:25:08.151916 systemd-logind[1630]: Session 4 logged out. Waiting for processes to exit. Dec 13 00:25:08.155502 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:37156.service - OpenSSH per-connection server daemon (10.0.0.1:37156). Dec 13 00:25:08.156264 systemd-logind[1630]: Removed session 4. Dec 13 00:25:08.216591 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 37156 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:08.218725 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:08.224820 systemd-logind[1630]: New session 5 of user core. Dec 13 00:25:08.238408 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 00:25:08.254259 sshd[1808]: Connection closed by 10.0.0.1 port 37156 Dec 13 00:25:08.254751 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:08.265772 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:37156.service: Deactivated successfully. Dec 13 00:25:08.268041 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 00:25:08.269049 systemd-logind[1630]: Session 5 logged out. Waiting for processes to exit. Dec 13 00:25:08.272366 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:37164.service - OpenSSH per-connection server daemon (10.0.0.1:37164). Dec 13 00:25:08.273297 systemd-logind[1630]: Removed session 5. Dec 13 00:25:08.333698 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 37164 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:08.335428 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:08.340667 systemd-logind[1630]: New session 6 of user core. Dec 13 00:25:08.350377 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 00:25:08.373306 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 00:25:08.373762 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:08.397632 sudo[1819]: pam_unix(sudo:session): session closed for user root Dec 13 00:25:08.399165 sshd[1818]: Connection closed by 10.0.0.1 port 37164 Dec 13 00:25:08.399496 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:08.412658 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:37164.service: Deactivated successfully. Dec 13 00:25:08.415076 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 00:25:08.416016 systemd-logind[1630]: Session 6 logged out. Waiting for processes to exit. Dec 13 00:25:08.418920 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:37180.service - OpenSSH per-connection server daemon (10.0.0.1:37180). Dec 13 00:25:08.419488 systemd-logind[1630]: Removed session 6. Dec 13 00:25:08.486275 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 37180 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:08.488581 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:08.493341 systemd-logind[1630]: New session 7 of user core. Dec 13 00:25:08.503398 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 00:25:08.519359 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 00:25:08.519715 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:08.681861 sudo[1832]: pam_unix(sudo:session): session closed for user root Dec 13 00:25:08.691189 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 00:25:08.691628 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:08.701133 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:25:08.753000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:25:08.754763 augenrules[1856]: No rules Dec 13 00:25:08.755792 kernel: kauditd_printk_skb: 85 callbacks suppressed Dec 13 00:25:08.755830 kernel: audit: type=1305 audit(1765585508.753:236): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:25:08.756555 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:25:08.756906 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:25:08.758225 sudo[1831]: pam_unix(sudo:session): session closed for user root Dec 13 00:25:08.753000 audit[1856]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd3ef8220 a2=420 a3=0 items=0 ppid=1837 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:08.759765 sshd[1830]: Connection closed by 10.0.0.1 port 37180 Dec 13 00:25:08.760114 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Dec 13 00:25:08.765207 kernel: audit: type=1300 audit(1765585508.753:236): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd3ef8220 a2=420 a3=0 items=0 ppid=1837 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:08.765270 kernel: audit: type=1327 audit(1765585508.753:236): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:08.753000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:25:08.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.772108 kernel: audit: type=1130 audit(1765585508.756:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.772138 kernel: audit: type=1131 audit(1765585508.756:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:08.757000 audit[1831]: USER_END pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.780858 kernel: audit: type=1106 audit(1765585508.757:239): pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.780885 kernel: audit: type=1104 audit(1765585508.757:240): pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.757000 audit[1831]: CRED_DISP pid=1831 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.784657 kernel: audit: type=1106 audit(1765585508.759:241): pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.759000 audit[1826]: USER_END pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.790433 kernel: audit: type=1104 audit(1765585508.760:242): pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.760000 audit[1826]: CRED_DISP pid=1826 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.804144 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:37180.service: Deactivated successfully. Dec 13 00:25:08.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.109:22-10.0.0.1:37180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.805953 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 00:25:08.807177 systemd-logind[1630]: Session 7 logged out. Waiting for processes to exit. Dec 13 00:25:08.809117 systemd-logind[1630]: Removed session 7. Dec 13 00:25:08.809256 kernel: audit: type=1131 audit(1765585508.803:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.109:22-10.0.0.1:37180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.109:22-10.0.0.1:37190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:25:08.810629 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:37190.service - OpenSSH per-connection server daemon (10.0.0.1:37190). Dec 13 00:25:08.875000 audit[1865]: USER_ACCT pid=1865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.877045 sshd[1865]: Accepted publickey for core from 10.0.0.1 port 37190 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:25:08.877000 audit[1865]: CRED_ACQ pid=1865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.877000 audit[1865]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc21ce030 a2=3 a3=0 items=0 ppid=1 pid=1865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:08.877000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:25:08.879063 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:25:08.884280 systemd-logind[1630]: New session 8 of user core. Dec 13 00:25:08.898487 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 00:25:08.899000 audit[1865]: USER_START pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.901000 audit[1869]: CRED_ACQ pid=1869 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:25:08.912000 audit[1870]: USER_ACCT pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.913985 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 00:25:08.912000 audit[1870]: CRED_REFR pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.913000 audit[1870]: USER_START pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:25:08.914388 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:25:09.421720 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 00:25:09.447860 (dockerd)[1893]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 00:25:09.867578 dockerd[1893]: time="2025-12-13T00:25:09.867437716Z" level=info msg="Starting up" Dec 13 00:25:09.874505 dockerd[1893]: time="2025-12-13T00:25:09.874451309Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 13 00:25:09.997581 dockerd[1893]: time="2025-12-13T00:25:09.997520688Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 13 00:25:10.595251 dockerd[1893]: time="2025-12-13T00:25:10.595180454Z" level=info msg="Loading containers: start." Dec 13 00:25:10.613272 kernel: Initializing XFRM netlink socket Dec 13 00:25:10.683000 audit[1946]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.683000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe05d9e030 a2=0 a3=0 items=0 ppid=1893 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.683000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:25:10.685000 audit[1948]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.685000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffd16459c0 a2=0 a3=0 items=0 ppid=1893 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:25:10.688000 audit[1950]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.688000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6e5c50d0 a2=0 a3=0 items=0 ppid=1893 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.688000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:25:10.691000 audit[1952]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.691000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9e809a30 a2=0 a3=0 items=0 ppid=1893 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:25:10.694000 audit[1954]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.694000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcafdafc60 a2=0 a3=0 items=0 ppid=1893 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:25:10.697000 audit[1956]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.697000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea3fd7060 a2=0 a3=0 items=0 ppid=1893 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:10.700000 audit[1958]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.700000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff6bb01170 a2=0 a3=0 items=0 ppid=1893 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:25:10.703000 audit[1960]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.703000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffece5e6af0 a2=0 a3=0 items=0 ppid=1893 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:25:10.749000 audit[1963]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.749000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdd2600db0 a2=0 a3=0 items=0 ppid=1893 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.749000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 00:25:10.752000 audit[1965]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1965 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.752000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffff89bb270 a2=0 a3=0 items=0 ppid=1893 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:25:10.754000 audit[1967]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.754000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff6556dc50 a2=0 a3=0 items=0 ppid=1893 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:25:10.757000 audit[1969]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.757000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd2657caf0 a2=0 a3=0 items=0 ppid=1893 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:10.759000 audit[1971]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.759000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd60785220 a2=0 a3=0 items=0 ppid=1893 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:25:10.803000 audit[2001]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.803000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff4a67cc90 a2=0 a3=0 items=0 ppid=1893 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.803000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:25:10.805000 audit[2003]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.805000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd87357c50 a2=0 a3=0 items=0 ppid=1893 pid=2003 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:25:10.808000 audit[2005]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.808000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf85687b0 a2=0 a3=0 items=0 ppid=1893 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:25:10.810000 audit[2007]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.810000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff520c9ce0 a2=0 a3=0 items=0 ppid=1893 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.810000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:25:10.812000 audit[2009]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.812000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce3583730 a2=0 a3=0 items=0 ppid=1893 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:25:10.814000 audit[2011]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.814000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe30c5b20 a2=0 a3=0 items=0 ppid=1893 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:10.817000 audit[2013]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.817000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc9326940 a2=0 a3=0 items=0 ppid=1893 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.817000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:25:10.820000 audit[2015]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.820000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe89ad6170 a2=0 a3=0 items=0 ppid=1893 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.820000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:25:10.822000 audit[2017]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.822000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc7fa23bb0 a2=0 a3=0 items=0 ppid=1893 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 13 00:25:10.825000 audit[2019]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.825000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffbe4f4750 a2=0 a3=0 items=0 ppid=1893 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:25:10.827000 audit[2021]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.827000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc67db9410 a2=0 a3=0 items=0 ppid=1893 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:25:10.830000 audit[2023]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.830000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffccb82fdb0 a2=0 a3=0 items=0 ppid=1893 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.830000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:25:10.832000 audit[2025]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.832000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffffd4491c0 a2=0 a3=0 items=0 ppid=1893 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:25:10.839000 audit[2030]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.839000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd0a9732f0 a2=0 a3=0 items=0 ppid=1893 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:25:10.842000 audit[2032]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.842000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdbb0a60f0 a2=0 a3=0 items=0 ppid=1893 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:25:10.844000 audit[2034]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:10.844000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcc987c360 a2=0 a3=0 items=0 ppid=1893 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:25:10.847000 audit[2036]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.847000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe32e564a0 a2=0 a3=0 items=0 ppid=1893 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:25:10.850000 audit[2038]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.850000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe958e2b70 a2=0 a3=0 items=0 ppid=1893 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:25:10.852000 audit[2040]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:10.852000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa6f0e5a0 a2=0 a3=0 items=0 ppid=1893 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:10.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:25:11.628000 audit[2045]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.628000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffce8335af0 a2=0 a3=0 items=0 ppid=1893 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 00:25:11.631000 audit[2047]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.631000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd69fa8820 a2=0 a3=0 items=0 ppid=1893 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 00:25:11.642000 audit[2055]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.642000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc8c8bb560 a2=0 a3=0 items=0 ppid=1893 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.642000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 13 00:25:11.654000 audit[2061]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.654000 
audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc90cf6540 a2=0 a3=0 items=0 ppid=1893 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 13 00:25:11.657000 audit[2063]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.657000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd266689e0 a2=0 a3=0 items=0 ppid=1893 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 00:25:11.659000 audit[2065]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.659000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffecbd30440 a2=0 a3=0 items=0 ppid=1893 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 13 00:25:11.662000 audit[2067]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.662000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffff5baa590 a2=0 a3=0 items=0 ppid=1893 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:25:11.664000 audit[2069]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:11.664000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffee83627b0 a2=0 a3=0 items=0 ppid=1893 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:11.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 00:25:11.666499 
systemd-networkd[1316]: docker0: Link UP Dec 13 00:25:12.195128 dockerd[1893]: time="2025-12-13T00:25:12.195048898Z" level=info msg="Loading containers: done." Dec 13 00:25:12.216323 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3691161761-merged.mount: Deactivated successfully. Dec 13 00:25:12.564874 dockerd[1893]: time="2025-12-13T00:25:12.564743107Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 00:25:12.564874 dockerd[1893]: time="2025-12-13T00:25:12.564837073Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 00:25:12.565090 dockerd[1893]: time="2025-12-13T00:25:12.564938433Z" level=info msg="Initializing buildkit" Dec 13 00:25:12.970836 dockerd[1893]: time="2025-12-13T00:25:12.970709579Z" level=info msg="Completed buildkit initialization" Dec 13 00:25:12.979445 dockerd[1893]: time="2025-12-13T00:25:12.979401559Z" level=info msg="Daemon has completed initialization" Dec 13 00:25:12.980001 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 00:25:12.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:12.981478 dockerd[1893]: time="2025-12-13T00:25:12.979736898Z" level=info msg="API listen on /run/docker.sock" Dec 13 00:25:13.522767 containerd[1658]: time="2025-12-13T00:25:13.522713543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 13 00:25:15.024501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905967428.mount: Deactivated successfully. 
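The audit records above capture every iptables/ip6tables call dockerd makes while it builds its DOCKER, DOCKER-USER, DOCKER-FORWARD and DOCKER-ISOLATION chains, but auditd hex-encodes the command line in the PROCTITLE field with NUL-separated arguments. A minimal decoding helper (illustrative only; decode_proctitle is not a tool present on this host) makes those records readable. The sample value is the first PROCTITLE above.

    def decode_proctitle(hex_field: str) -> str:
        # auditd hex-encodes the raw argv block of the audited process;
        # individual arguments are separated by NUL bytes.
        return bytes.fromhex(hex_field).replace(b"\x00", b" ").decode("utf-8", errors="replace")

    sample = ("2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572"
              "002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32")
    print(decode_proctitle(sample))
    # -> /usr/bin/ip6tables --wait -t filter -N DOCKER-ISOLATION-STAGE-2

Decoding the rest of the block the same way shows dockerd installing the familiar docker0 rules (the 172.17.0.0/16 MASQUERADE, the DOCKER-USER RETURN, the isolation DROPs), which lines up with the "docker0: Link UP" and "Loading containers: done." messages that follow.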
Dec 13 00:25:15.937600 containerd[1658]: time="2025-12-13T00:25:15.937529635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:15.938564 containerd[1658]: time="2025-12-13T00:25:15.938518821Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25400692" Dec 13 00:25:15.939902 containerd[1658]: time="2025-12-13T00:25:15.939873050Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:15.942541 containerd[1658]: time="2025-12-13T00:25:15.942497432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:15.943296 containerd[1658]: time="2025-12-13T00:25:15.943258659Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.420506974s" Dec 13 00:25:15.943296 containerd[1658]: time="2025-12-13T00:25:15.943294727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 13 00:25:15.943986 containerd[1658]: time="2025-12-13T00:25:15.943958051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 13 00:25:16.735622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 00:25:16.737739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:16.949478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:16.954422 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 13 00:25:16.954515 kernel: audit: type=1130 audit(1765585516.948:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:16.963769 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:17.019180 kubelet[2184]: E1213 00:25:17.019047 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:25:17.026204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:25:17.026503 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
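The kubelet failure here (and its repeat below at restart counter 2) is simply the service starting before its configuration exists: /var/lib/kubelet/config.yaml is normally written by kubeadm, so until that happens the unit exits with status 1 and systemd keeps scheduling restarts. A trivial pre-flight check of the same condition, as a sketch; the path comes from the error message above, nothing else is taken from the host.

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

    def kubelet_config_present() -> bool:
        # The kubelet refuses to start while this file is missing, which is what
        # produces the exit-code failures and scheduled restarts in the log.
        return KUBELET_CONFIG.is_file()

    if not kubelet_config_present():
        print(f"{KUBELET_CONFIG} is missing; expect kubelet to keep crash-looping until it appears")

The successful start later in the log, once systemd reloads and the unit is restarted around 00:25:32, is consistent with that file having been written in the meantime.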
Dec 13 00:25:17.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:17.027110 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.9M memory peak. Dec 13 00:25:17.032249 kernel: audit: type=1131 audit(1765585517.025:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:17.443932 containerd[1658]: time="2025-12-13T00:25:17.443795121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:17.444837 containerd[1658]: time="2025-12-13T00:25:17.444801007Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Dec 13 00:25:17.446013 containerd[1658]: time="2025-12-13T00:25:17.445969899Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:17.448802 containerd[1658]: time="2025-12-13T00:25:17.448763739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:17.449891 containerd[1658]: time="2025-12-13T00:25:17.449863231Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.505866517s" Dec 13 00:25:17.449891 containerd[1658]: time="2025-12-13T00:25:17.449889009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 13 00:25:17.450616 containerd[1658]: time="2025-12-13T00:25:17.450421708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 13 00:25:18.396019 containerd[1658]: time="2025-12-13T00:25:18.395967815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:18.396887 containerd[1658]: time="2025-12-13T00:25:18.396848186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Dec 13 00:25:18.397899 containerd[1658]: time="2025-12-13T00:25:18.397856647Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:18.400168 containerd[1658]: time="2025-12-13T00:25:18.400125271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:18.400943 containerd[1658]: time="2025-12-13T00:25:18.400905565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with 
image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 950.456355ms" Dec 13 00:25:18.400943 containerd[1658]: time="2025-12-13T00:25:18.400933297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 13 00:25:18.401790 containerd[1658]: time="2025-12-13T00:25:18.401600147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 13 00:25:20.014672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294178778.mount: Deactivated successfully. Dec 13 00:25:20.831014 containerd[1658]: time="2025-12-13T00:25:20.830950426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:20.831954 containerd[1658]: time="2025-12-13T00:25:20.831912390Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 13 00:25:20.833201 containerd[1658]: time="2025-12-13T00:25:20.833161463Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:20.835027 containerd[1658]: time="2025-12-13T00:25:20.834997005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:20.835505 containerd[1658]: time="2025-12-13T00:25:20.835470663Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.433841091s" Dec 13 00:25:20.835544 containerd[1658]: time="2025-12-13T00:25:20.835502102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 13 00:25:20.836165 containerd[1658]: time="2025-12-13T00:25:20.835962966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 13 00:25:21.761044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042089018.mount: Deactivated successfully. 
Dec 13 00:25:23.094529 containerd[1658]: time="2025-12-13T00:25:23.094471957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:23.095465 containerd[1658]: time="2025-12-13T00:25:23.095424053Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511" Dec 13 00:25:23.096946 containerd[1658]: time="2025-12-13T00:25:23.096908005Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:23.100990 containerd[1658]: time="2025-12-13T00:25:23.100932202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:23.102006 containerd[1658]: time="2025-12-13T00:25:23.101956193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.265958541s" Dec 13 00:25:23.102006 containerd[1658]: time="2025-12-13T00:25:23.101991208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 13 00:25:23.103102 containerd[1658]: time="2025-12-13T00:25:23.103061876Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 13 00:25:23.691569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533832283.mount: Deactivated successfully. 
Dec 13 00:25:23.698298 containerd[1658]: time="2025-12-13T00:25:23.698209500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:23.699179 containerd[1658]: time="2025-12-13T00:25:23.699112594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 13 00:25:23.700515 containerd[1658]: time="2025-12-13T00:25:23.700461925Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:23.703113 containerd[1658]: time="2025-12-13T00:25:23.703057222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:23.703978 containerd[1658]: time="2025-12-13T00:25:23.703935889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 600.825793ms" Dec 13 00:25:23.703978 containerd[1658]: time="2025-12-13T00:25:23.703973280Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 13 00:25:23.704541 containerd[1658]: time="2025-12-13T00:25:23.704506189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 13 00:25:24.547323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051917304.mount: Deactivated successfully. Dec 13 00:25:27.235384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 00:25:27.237059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:27.923051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:27.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:27.928264 kernel: audit: type=1130 audit(1765585527.922:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:27.947750 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:25:28.045774 kubelet[2321]: E1213 00:25:28.045700 2321 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:25:28.050731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:25:28.051033 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 00:25:28.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:28.051864 systemd[1]: kubelet.service: Consumed 306ms CPU time, 109.6M memory peak. Dec 13 00:25:28.056247 kernel: audit: type=1131 audit(1765585528.050:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:28.150380 containerd[1658]: time="2025-12-13T00:25:28.150308846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:28.152464 containerd[1658]: time="2025-12-13T00:25:28.152075218Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Dec 13 00:25:28.153838 containerd[1658]: time="2025-12-13T00:25:28.153780276Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:28.157094 containerd[1658]: time="2025-12-13T00:25:28.157031032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:28.158388 containerd[1658]: time="2025-12-13T00:25:28.158337713Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.453798302s" Dec 13 00:25:28.158388 containerd[1658]: time="2025-12-13T00:25:28.158371116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 13 00:25:31.712569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:31.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:31.712750 systemd[1]: kubelet.service: Consumed 306ms CPU time, 109.6M memory peak. Dec 13 00:25:31.714938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:31.721674 kernel: audit: type=1130 audit(1765585531.711:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:31.721791 kernel: audit: type=1131 audit(1765585531.711:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:31.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:31.745736 systemd[1]: Reload requested from client PID 2361 ('systemctl') (unit session-8.scope)... 
Dec 13 00:25:31.745758 systemd[1]: Reloading... Dec 13 00:25:31.840270 zram_generator::config[2410]: No configuration found. Dec 13 00:25:32.349576 systemd[1]: Reloading finished in 603 ms. Dec 13 00:25:32.378263 kernel: audit: type=1334 audit(1765585532.373:300): prog-id=67 op=LOAD Dec 13 00:25:32.378360 kernel: audit: type=1334 audit(1765585532.373:301): prog-id=49 op=UNLOAD Dec 13 00:25:32.373000 audit: BPF prog-id=67 op=LOAD Dec 13 00:25:32.373000 audit: BPF prog-id=49 op=UNLOAD Dec 13 00:25:32.374000 audit: BPF prog-id=68 op=LOAD Dec 13 00:25:32.379924 kernel: audit: type=1334 audit(1765585532.374:302): prog-id=68 op=LOAD Dec 13 00:25:32.379983 kernel: audit: type=1334 audit(1765585532.374:303): prog-id=69 op=LOAD Dec 13 00:25:32.374000 audit: BPF prog-id=69 op=LOAD Dec 13 00:25:32.381356 kernel: audit: type=1334 audit(1765585532.374:304): prog-id=50 op=UNLOAD Dec 13 00:25:32.374000 audit: BPF prog-id=50 op=UNLOAD Dec 13 00:25:32.382787 kernel: audit: type=1334 audit(1765585532.374:305): prog-id=51 op=UNLOAD Dec 13 00:25:32.374000 audit: BPF prog-id=51 op=UNLOAD Dec 13 00:25:32.375000 audit: BPF prog-id=70 op=LOAD Dec 13 00:25:32.375000 audit: BPF prog-id=63 op=UNLOAD Dec 13 00:25:32.385000 audit: BPF prog-id=71 op=LOAD Dec 13 00:25:32.385000 audit: BPF prog-id=53 op=UNLOAD Dec 13 00:25:32.385000 audit: BPF prog-id=72 op=LOAD Dec 13 00:25:32.385000 audit: BPF prog-id=73 op=LOAD Dec 13 00:25:32.385000 audit: BPF prog-id=54 op=UNLOAD Dec 13 00:25:32.385000 audit: BPF prog-id=55 op=UNLOAD Dec 13 00:25:32.388000 audit: BPF prog-id=74 op=LOAD Dec 13 00:25:32.388000 audit: BPF prog-id=64 op=UNLOAD Dec 13 00:25:32.388000 audit: BPF prog-id=75 op=LOAD Dec 13 00:25:32.388000 audit: BPF prog-id=76 op=LOAD Dec 13 00:25:32.388000 audit: BPF prog-id=65 op=UNLOAD Dec 13 00:25:32.388000 audit: BPF prog-id=66 op=UNLOAD Dec 13 00:25:32.389000 audit: BPF prog-id=77 op=LOAD Dec 13 00:25:32.389000 audit: BPF prog-id=60 op=UNLOAD Dec 13 00:25:32.389000 audit: BPF prog-id=78 op=LOAD Dec 13 00:25:32.389000 audit: BPF prog-id=79 op=LOAD Dec 13 00:25:32.389000 audit: BPF prog-id=61 op=UNLOAD Dec 13 00:25:32.389000 audit: BPF prog-id=62 op=UNLOAD Dec 13 00:25:32.390000 audit: BPF prog-id=80 op=LOAD Dec 13 00:25:32.390000 audit: BPF prog-id=56 op=UNLOAD Dec 13 00:25:32.391000 audit: BPF prog-id=81 op=LOAD Dec 13 00:25:32.391000 audit: BPF prog-id=52 op=UNLOAD Dec 13 00:25:32.392000 audit: BPF prog-id=82 op=LOAD Dec 13 00:25:32.392000 audit: BPF prog-id=57 op=UNLOAD Dec 13 00:25:32.392000 audit: BPF prog-id=83 op=LOAD Dec 13 00:25:32.392000 audit: BPF prog-id=84 op=LOAD Dec 13 00:25:32.392000 audit: BPF prog-id=58 op=UNLOAD Dec 13 00:25:32.392000 audit: BPF prog-id=59 op=UNLOAD Dec 13 00:25:32.393000 audit: BPF prog-id=85 op=LOAD Dec 13 00:25:32.393000 audit: BPF prog-id=86 op=LOAD Dec 13 00:25:32.393000 audit: BPF prog-id=47 op=UNLOAD Dec 13 00:25:32.393000 audit: BPF prog-id=48 op=UNLOAD Dec 13 00:25:32.415955 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 00:25:32.416076 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 00:25:32.416431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:32.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:25:32.416490 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.4M memory peak. 
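The burst of "audit: BPF prog-id=... op=LOAD / op=UNLOAD" records around the reload looks dramatic but appears to be ordinary churn: systemd attaches small BPF programs to unit cgroups for its sandboxing features, and a daemon reload replaces them, so the kernel audits an UNLOAD for each superseded program and a LOAD for its replacement. A throwaway tally over the journal text (illustrative; bpf_churn is not a real tool) makes it easy to count how many programs were loaded versus unloaded.

    import re
    from collections import Counter

    BPF_EVENT = re.compile(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)")

    def bpf_churn(journal_text: str) -> Counter:
        # Tally the BPF program load/unload events that a systemd reload produces.
        return Counter(BPF_EVENT.findall(journal_text))

    sample = "audit: BPF prog-id=67 op=LOAD ... audit: BPF prog-id=49 op=UNLOAD"
    print(bpf_churn(sample))
    # -> Counter({'LOAD': 1, 'UNLOAD': 1})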
Dec 13 00:25:32.418342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:32.653227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:32.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:32.657707 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:25:32.702618 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:25:32.702618 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:25:32.703019 kubelet[2455]: I1213 00:25:32.702673 2455 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:25:33.013626 kubelet[2455]: I1213 00:25:33.013577 2455 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 13 00:25:33.013626 kubelet[2455]: I1213 00:25:33.013609 2455 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:25:33.015465 kubelet[2455]: I1213 00:25:33.015440 2455 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 13 00:25:33.015465 kubelet[2455]: I1213 00:25:33.015455 2455 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 00:25:33.015738 kubelet[2455]: I1213 00:25:33.015712 2455 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:25:33.449086 kubelet[2455]: I1213 00:25:33.448936 2455 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:25:33.449802 kubelet[2455]: E1213 00:25:33.449750 2455 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 13 00:25:33.453220 kubelet[2455]: I1213 00:25:33.453187 2455 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:25:33.458931 kubelet[2455]: I1213 00:25:33.458884 2455 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 13 00:25:33.459764 kubelet[2455]: I1213 00:25:33.459708 2455 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:25:33.459912 kubelet[2455]: I1213 00:25:33.459746 2455 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:25:33.460043 kubelet[2455]: I1213 00:25:33.459913 2455 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:25:33.460043 kubelet[2455]: I1213 00:25:33.459922 2455 container_manager_linux.go:306] "Creating device plugin manager" Dec 13 00:25:33.460043 kubelet[2455]: I1213 00:25:33.460039 2455 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 13 00:25:33.463434 kubelet[2455]: I1213 00:25:33.463420 2455 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:25:33.463635 kubelet[2455]: I1213 00:25:33.463611 2455 kubelet.go:475] "Attempting to sync node with API server" Dec 13 00:25:33.463635 kubelet[2455]: I1213 00:25:33.463627 2455 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:25:33.463687 kubelet[2455]: I1213 00:25:33.463652 2455 kubelet.go:387] "Adding apiserver pod source" Dec 13 00:25:33.463687 kubelet[2455]: I1213 00:25:33.463672 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:25:33.464324 kubelet[2455]: E1213 00:25:33.464291 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:25:33.464731 kubelet[2455]: E1213 00:25:33.464693 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 00:25:33.466870 kubelet[2455]: I1213 00:25:33.466832 2455 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:25:33.467444 kubelet[2455]: I1213 00:25:33.467408 2455 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:25:33.467444 kubelet[2455]: I1213 00:25:33.467443 2455 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 13 00:25:33.467516 kubelet[2455]: W1213 00:25:33.467503 2455 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 00:25:33.470991 kubelet[2455]: I1213 00:25:33.470960 2455 server.go:1262] "Started kubelet" Dec 13 00:25:33.471057 kubelet[2455]: I1213 00:25:33.471019 2455 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:25:33.472306 kubelet[2455]: I1213 00:25:33.472005 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:25:33.474538 kubelet[2455]: I1213 00:25:33.474366 2455 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:25:33.474600 kubelet[2455]: I1213 00:25:33.474568 2455 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 13 00:25:33.474871 kubelet[2455]: I1213 00:25:33.474848 2455 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:25:33.476908 kubelet[2455]: I1213 00:25:33.476882 2455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:25:33.477859 kubelet[2455]: I1213 00:25:33.477826 2455 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 13 00:25:33.478124 kubelet[2455]: E1213 00:25:33.478097 2455 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:25:33.486677 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 13 00:25:33.486807 kernel: audit: type=1325 audit(1765585533.480:342): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.480000 audit[2472]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.486920 kubelet[2455]: I1213 00:25:33.482070 2455 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 00:25:33.486920 kubelet[2455]: I1213 00:25:33.482174 2455 reconciler.go:29] "Reconciler: start to sync state" Dec 13 00:25:33.486920 kubelet[2455]: E1213 00:25:33.483049 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:25:33.486920 kubelet[2455]: I1213 00:25:33.483504 2455 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:25:33.486920 
kubelet[2455]: I1213 00:25:33.483597 2455 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:25:33.487115 kubelet[2455]: E1213 00:25:33.485592 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809eb02f1a783a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:25:33.470931002 +0000 UTC m=+0.809813531,LastTimestamp:2025-12-13 00:25:33.470931002 +0000 UTC m=+0.809813531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:25:33.487319 kubelet[2455]: E1213 00:25:33.487289 2455 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:25:33.487454 kubelet[2455]: I1213 00:25:33.487430 2455 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:25:33.488391 kubelet[2455]: I1213 00:25:33.488364 2455 server.go:310] "Adding debug handlers to kubelet server" Dec 13 00:25:33.480000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff96d24250 a2=0 a3=0 items=0 ppid=2455 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.490354 kubelet[2455]: E1213 00:25:33.489890 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" Dec 13 00:25:33.496288 kernel: audit: type=1300 audit(1765585533.480:342): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff96d24250 a2=0 a3=0 items=0 ppid=2455 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:25:33.501274 kernel: audit: type=1327 audit(1765585533.480:342): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:25:33.501321 kernel: audit: type=1325 audit(1765585533.481:343): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.481000 audit[2473]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.481000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8b659750 a2=0 a3=0 items=0 ppid=2455 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.507582 kubelet[2455]: I1213 00:25:33.507381 2455 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 13 00:25:33.509456 kubelet[2455]: I1213 00:25:33.509424 2455 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 13 00:25:33.510095 kubelet[2455]: I1213 00:25:33.509761 2455 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 13 00:25:33.510095 kubelet[2455]: I1213 00:25:33.509791 2455 kubelet.go:2427] "Starting kubelet main sync loop" Dec 13 00:25:33.510095 kubelet[2455]: E1213 00:25:33.509843 2455 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:25:33.510095 kubelet[2455]: I1213 00:25:33.509957 2455 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:25:33.510095 kubelet[2455]: I1213 00:25:33.509982 2455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:25:33.510095 kubelet[2455]: I1213 00:25:33.509997 2455 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:25:33.510748 kernel: audit: type=1300 audit(1765585533.481:343): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8b659750 a2=0 a3=0 items=0 ppid=2455 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.511741 kubelet[2455]: E1213 00:25:33.511715 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:25:33.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:25:33.483000 audit[2475]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.518039 kernel: audit: type=1327 audit(1765585533.481:343): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:25:33.518079 kernel: audit: type=1325 audit(1765585533.483:344): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.518119 kernel: audit: type=1300 audit(1765585533.483:344): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4eec62f0 a2=0 a3=0 items=0 ppid=2455 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.483000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4eec62f0 a2=0 a3=0 items=0 ppid=2455 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.518219 kubelet[2455]: I1213 00:25:33.518178 2455 policy_none.go:49] "None policy: Start" Dec 13 00:25:33.518219 kubelet[2455]: I1213 00:25:33.518203 2455 memory_manager.go:187] 
"Starting memorymanager" policy="None" Dec 13 00:25:33.518313 kubelet[2455]: I1213 00:25:33.518225 2455 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 13 00:25:33.520921 kubelet[2455]: I1213 00:25:33.520897 2455 policy_none.go:47] "Start" Dec 13 00:25:33.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:33.526808 kernel: audit: type=1327 audit(1765585533.483:344): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:33.526842 kernel: audit: type=1325 audit(1765585533.486:345): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.486000 audit[2477]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.528780 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 00:25:33.486000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc33fa7b10 a2=0 a3=0 items=0 ppid=2455 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:33.505000 audit[2485]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.505000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff42dfced0 a2=0 a3=0 items=0 ppid=2455 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 13 00:25:33.507000 audit[2487]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:33.507000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff369d4090 a2=0 a3=0 items=0 ppid=2455 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:25:33.509000 audit[2488]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.509000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef7ad0940 a2=0 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 00:25:33.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:25:33.509000 audit[2489]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:33.509000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe66816fb0 a2=0 a3=0 items=0 ppid=2455 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:25:33.511000 audit[2490]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.511000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc86d3570 a2=0 a3=0 items=0 ppid=2455 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:25:33.511000 audit[2491]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:33.511000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbb9ceb60 a2=0 a3=0 items=0 ppid=2455 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:25:33.512000 audit[2492]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:33.512000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9aff79f0 a2=0 a3=0 items=0 ppid=2455 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:25:33.515000 audit[2493]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:33.515000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff01c971e0 a2=0 a3=0 items=0 ppid=2455 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:33.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:25:33.548132 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
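Alongside Docker's chains, the kubelet is now registering its own: KUBE-IPTABLES-HINT, the KUBE-KUBELET-CANARY chains in the mangle, nat and filter tables, and the KUBE-FIREWALL chain hooked into INPUT and OUTPUT. The canary chains are, as far as I can tell, deliberately left empty: the kubelet re-checks them periodically and treats their disappearance as a sign that something flushed iptables and its rules need to be re-synced. A minimal check of the same kind, as a sketch (hypothetical helper, shelling out to the stock iptables binary):

    import subprocess

    def chain_exists(table: str = "mangle", chain: str = "KUBE-KUBELET-CANARY") -> bool:
        # Listing a chain that has been flushed away makes iptables exit non-zero,
        # which is the signal the canary mechanism relies on.
        result = subprocess.run(
            ["iptables", "-w", "-t", table, "-nL", chain],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    print(chain_exists())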
Dec 13 00:25:33.551365 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 00:25:33.574629 kubelet[2455]: E1213 00:25:33.574593 2455 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:25:33.574930 kubelet[2455]: I1213 00:25:33.574864 2455 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:25:33.574930 kubelet[2455]: I1213 00:25:33.574876 2455 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:25:33.575208 kubelet[2455]: I1213 00:25:33.575178 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:25:33.575957 kubelet[2455]: E1213 00:25:33.575940 2455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 00:25:33.576043 kubelet[2455]: E1213 00:25:33.576004 2455 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 00:25:33.626805 systemd[1]: Created slice kubepods-burstable-pod7454d231c50d5cac6d5acf26c8573dc9.slice - libcontainer container kubepods-burstable-pod7454d231c50d5cac6d5acf26c8573dc9.slice. Dec 13 00:25:33.645707 kubelet[2455]: E1213 00:25:33.645654 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:33.649006 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 13 00:25:33.651429 kubelet[2455]: E1213 00:25:33.651381 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:33.654144 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
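Every "connection refused" against https://10.0.0.109:6443 in this part of the log has the same cause: this node is the control plane, and the kube-apiserver it is bringing up as a static pod (the per-pod slices created just above) is not listening yet. The kubelet therefore keeps retrying, and the lease controller's retry interval doubles: 200ms earlier, then 400ms and 800ms in the entries that follow. A standalone sketch of the same wait-and-back-off pattern (hypothetical helper, not kubelet code; the 7 s ceiling is an arbitrary choice for the sketch):

    import socket
    import time

    def wait_for_apiserver(host: str = "10.0.0.109", port: int = 6443,
                           initial: float = 0.2, ceiling: float = 7.0) -> None:
        # Probe the API server with a plain TCP connect, doubling the delay after
        # each refusal (0.2s -> 0.4s -> 0.8s -> ...), capped at an arbitrary ceiling.
        delay = initial
        while True:
            try:
                with socket.create_connection((host, port), timeout=1.0):
                    print(f"apiserver reachable at {host}:{port}")
                    return
            except OSError:
                print(f"connection refused, retrying in {delay:.1f}s")
                time.sleep(delay)
                delay = min(delay * 2, ceiling)

    # wait_for_apiserver()  # would block until the static kube-apiserver pod starts serving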
Dec 13 00:25:33.656385 kubelet[2455]: E1213 00:25:33.656358 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:33.677062 kubelet[2455]: I1213 00:25:33.677024 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:33.677543 kubelet[2455]: E1213 00:25:33.677494 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Dec 13 00:25:33.690658 kubelet[2455]: E1213 00:25:33.690597 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" Dec 13 00:25:33.783157 kubelet[2455]: I1213 00:25:33.783083 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7454d231c50d5cac6d5acf26c8573dc9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7454d231c50d5cac6d5acf26c8573dc9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:33.783157 kubelet[2455]: I1213 00:25:33.783154 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:33.783723 kubelet[2455]: I1213 00:25:33.783176 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:33.783723 kubelet[2455]: I1213 00:25:33.783209 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7454d231c50d5cac6d5acf26c8573dc9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7454d231c50d5cac6d5acf26c8573dc9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:33.783723 kubelet[2455]: I1213 00:25:33.783272 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:33.783723 kubelet[2455]: I1213 00:25:33.783336 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:33.783723 kubelet[2455]: I1213 00:25:33.783396 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:33.783898 kubelet[2455]: I1213 00:25:33.783431 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:33.783898 kubelet[2455]: I1213 00:25:33.783449 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7454d231c50d5cac6d5acf26c8573dc9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7454d231c50d5cac6d5acf26c8573dc9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:33.879935 kubelet[2455]: I1213 00:25:33.879883 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:33.880328 kubelet[2455]: E1213 00:25:33.880285 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Dec 13 00:25:33.950309 kubelet[2455]: E1213 00:25:33.950199 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:33.951664 containerd[1658]: time="2025-12-13T00:25:33.951614815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7454d231c50d5cac6d5acf26c8573dc9,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:33.955620 kubelet[2455]: E1213 00:25:33.955587 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:33.956124 containerd[1658]: time="2025-12-13T00:25:33.956069129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:33.961049 kubelet[2455]: E1213 00:25:33.960997 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:33.961639 containerd[1658]: time="2025-12-13T00:25:33.961602917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:34.092126 kubelet[2455]: E1213 00:25:34.091978 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" Dec 13 00:25:34.268694 kubelet[2455]: E1213 00:25:34.268634 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 13 
00:25:34.282207 kubelet[2455]: I1213 00:25:34.282165 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:34.282611 kubelet[2455]: E1213 00:25:34.282556 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Dec 13 00:25:34.298639 kubelet[2455]: E1213 00:25:34.298540 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 13 00:25:34.316286 kubelet[2455]: E1213 00:25:34.316200 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 13 00:25:34.754118 kubelet[2455]: E1213 00:25:34.754068 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 13 00:25:34.886758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672811704.mount: Deactivated successfully. Dec 13 00:25:34.892486 containerd[1658]: time="2025-12-13T00:25:34.892442974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:34.893083 kubelet[2455]: E1213 00:25:34.893052 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="1.6s" Dec 13 00:25:34.894583 containerd[1658]: time="2025-12-13T00:25:34.894478221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:25:34.896387 kubelet[2455]: E1213 00:25:34.896282 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809eb02f1a783a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:25:33.470931002 +0000 UTC m=+0.809813531,LastTimestamp:2025-12-13 00:25:33.470931002 +0000 UTC m=+0.809813531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:25:34.898831 containerd[1658]: time="2025-12-13T00:25:34.898792652Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:34.899924 containerd[1658]: time="2025-12-13T00:25:34.899893226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:34.900960 containerd[1658]: time="2025-12-13T00:25:34.900906065Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:34.901795 containerd[1658]: time="2025-12-13T00:25:34.901766900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:25:34.902690 containerd[1658]: time="2025-12-13T00:25:34.902638875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:25:34.903537 containerd[1658]: time="2025-12-13T00:25:34.903506512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 942.095927ms" Dec 13 00:25:34.903871 containerd[1658]: time="2025-12-13T00:25:34.903844075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:25:34.907441 containerd[1658]: time="2025-12-13T00:25:34.907395636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 951.537693ms" Dec 13 00:25:34.908118 containerd[1658]: time="2025-12-13T00:25:34.908052097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 943.298041ms" Dec 13 00:25:35.022273 containerd[1658]: time="2025-12-13T00:25:35.021952871Z" level=info msg="connecting to shim 174c02610f6bdeffc9b702246d1eb1f21c31c3839cc8372e925a8b2a740ee98f" address="unix:///run/containerd/s/114155aa7c821ec800c77dbf9026e5ba40914f8bf59ff40dbff1cdf2772efa2b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:35.033571 containerd[1658]: time="2025-12-13T00:25:35.033515515Z" level=info msg="connecting to shim 07dca6d5ec87f4cac5fb06f96ab6a60444ee5e7e906d69b1973a968844d34506" address="unix:///run/containerd/s/7196f9009c34217550df25c2150b45cbbc0792d9d067fa03243974fbbdc2c1f6" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:35.036784 containerd[1658]: time="2025-12-13T00:25:35.036725064Z" level=info msg="connecting to shim 050f3c70272872e3889a37c09d8d952ab305514939d57c2d9e9b10d84413587d" address="unix:///run/containerd/s/40e006295a89c23d640e1c5369958296c87737f2c624425b28755605d1bddfc2" namespace=k8s.io protocol=ttrpc version=3 Dec 
13 00:25:35.103832 kubelet[2455]: I1213 00:25:35.103755 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:35.110048 kubelet[2455]: E1213 00:25:35.109480 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Dec 13 00:25:35.115557 systemd[1]: Started cri-containerd-174c02610f6bdeffc9b702246d1eb1f21c31c3839cc8372e925a8b2a740ee98f.scope - libcontainer container 174c02610f6bdeffc9b702246d1eb1f21c31c3839cc8372e925a8b2a740ee98f. Dec 13 00:25:35.121640 systemd[1]: Started cri-containerd-050f3c70272872e3889a37c09d8d952ab305514939d57c2d9e9b10d84413587d.scope - libcontainer container 050f3c70272872e3889a37c09d8d952ab305514939d57c2d9e9b10d84413587d. Dec 13 00:25:35.123207 systemd[1]: Started cri-containerd-07dca6d5ec87f4cac5fb06f96ab6a60444ee5e7e906d69b1973a968844d34506.scope - libcontainer container 07dca6d5ec87f4cac5fb06f96ab6a60444ee5e7e906d69b1973a968844d34506. Dec 13 00:25:35.139000 audit: BPF prog-id=87 op=LOAD Dec 13 00:25:35.139000 audit: BPF prog-id=88 op=LOAD Dec 13 00:25:35.139000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.140000 audit: BPF prog-id=88 op=UNLOAD Dec 13 00:25:35.140000 audit[2537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.140000 audit: BPF prog-id=89 op=LOAD Dec 13 00:25:35.140000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.140000 audit: BPF prog-id=90 op=LOAD Dec 13 00:25:35.140000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.140000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.140000 audit: BPF prog-id=90 op=UNLOAD Dec 13 00:25:35.140000 audit[2537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.140000 audit: BPF prog-id=89 op=UNLOAD Dec 13 00:25:35.140000 audit[2537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.140000 audit: BPF prog-id=91 op=LOAD Dec 13 00:25:35.140000 audit[2537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2509 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137346330323631306636626465666663396237303232343664316562 Dec 13 00:25:35.148000 audit: BPF prog-id=92 op=LOAD Dec 13 00:25:35.149000 audit: BPF prog-id=93 op=LOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.149000 audit: BPF prog-id=93 op=UNLOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.149000 audit: BPF prog-id=94 op=LOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.149000 audit: BPF prog-id=95 op=LOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.149000 audit: BPF prog-id=95 op=UNLOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.149000 audit: BPF prog-id=94 op=UNLOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.149000 audit: BPF prog-id=96 op=LOAD Dec 13 00:25:35.149000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2531 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.149000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037646361366435656338376634636163356662303666393661623661 Dec 13 00:25:35.150000 audit: BPF prog-id=97 op=LOAD Dec 13 00:25:35.151000 audit: BPF prog-id=98 op=LOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.151000 audit: BPF prog-id=98 op=UNLOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.151000 audit: BPF prog-id=99 op=LOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.151000 audit: BPF prog-id=100 op=LOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.151000 audit: BPF prog-id=100 op=UNLOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.151000 audit: BPF prog-id=99 op=UNLOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.151000 audit: BPF prog-id=101 op=LOAD Dec 13 00:25:35.151000 audit[2562]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035306633633730323732383732653338383961333763303964386439 Dec 13 00:25:35.224472 containerd[1658]: time="2025-12-13T00:25:35.224436781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7454d231c50d5cac6d5acf26c8573dc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"050f3c70272872e3889a37c09d8d952ab305514939d57c2d9e9b10d84413587d\"" Dec 13 00:25:35.225951 kubelet[2455]: E1213 00:25:35.225926 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:35.229048 containerd[1658]: time="2025-12-13T00:25:35.229019164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"07dca6d5ec87f4cac5fb06f96ab6a60444ee5e7e906d69b1973a968844d34506\"" Dec 13 00:25:35.231157 kubelet[2455]: E1213 00:25:35.230944 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:35.234055 containerd[1658]: time="2025-12-13T00:25:35.234024541Z" level=info msg="CreateContainer within sandbox \"050f3c70272872e3889a37c09d8d952ab305514939d57c2d9e9b10d84413587d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 00:25:35.234868 containerd[1658]: time="2025-12-13T00:25:35.234831845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"174c02610f6bdeffc9b702246d1eb1f21c31c3839cc8372e925a8b2a740ee98f\"" Dec 13 00:25:35.237013 containerd[1658]: time="2025-12-13T00:25:35.236977760Z" level=info msg="CreateContainer within sandbox \"07dca6d5ec87f4cac5fb06f96ab6a60444ee5e7e906d69b1973a968844d34506\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 00:25:35.237726 kubelet[2455]: E1213 00:25:35.237706 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:35.242889 containerd[1658]: time="2025-12-13T00:25:35.242861293Z" level=info msg="CreateContainer within sandbox \"174c02610f6bdeffc9b702246d1eb1f21c31c3839cc8372e925a8b2a740ee98f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 00:25:35.248700 containerd[1658]: time="2025-12-13T00:25:35.248659798Z" level=info msg="Container c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:35.257386 containerd[1658]: time="2025-12-13T00:25:35.257328695Z" level=info msg="Container 1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:35.261641 containerd[1658]: time="2025-12-13T00:25:35.261602720Z" level=info msg="Container 0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:35.261772 containerd[1658]: time="2025-12-13T00:25:35.261712606Z" level=info msg="CreateContainer within sandbox \"050f3c70272872e3889a37c09d8d952ab305514939d57c2d9e9b10d84413587d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54\"" Dec 13 00:25:35.262591 containerd[1658]: time="2025-12-13T00:25:35.262562310Z" level=info msg="StartContainer for \"c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54\"" Dec 13 00:25:35.264058 containerd[1658]: time="2025-12-13T00:25:35.264025283Z" level=info msg="connecting to shim c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54" address="unix:///run/containerd/s/40e006295a89c23d640e1c5369958296c87737f2c624425b28755605d1bddfc2" protocol=ttrpc version=3 Dec 13 00:25:35.266800 containerd[1658]: time="2025-12-13T00:25:35.266759070Z" level=info msg="CreateContainer within sandbox \"07dca6d5ec87f4cac5fb06f96ab6a60444ee5e7e906d69b1973a968844d34506\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e\"" Dec 13 00:25:35.267332 containerd[1658]: time="2025-12-13T00:25:35.267297019Z" level=info msg="StartContainer for \"1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e\"" Dec 13 00:25:35.268584 containerd[1658]: time="2025-12-13T00:25:35.268538157Z" level=info msg="connecting to shim 1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e" address="unix:///run/containerd/s/7196f9009c34217550df25c2150b45cbbc0792d9d067fa03243974fbbdc2c1f6" protocol=ttrpc version=3 Dec 13 00:25:35.279013 containerd[1658]: time="2025-12-13T00:25:35.277013611Z" level=info msg="CreateContainer within sandbox \"174c02610f6bdeffc9b702246d1eb1f21c31c3839cc8372e925a8b2a740ee98f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca\"" Dec 13 00:25:35.280505 containerd[1658]: time="2025-12-13T00:25:35.280012064Z" level=info msg="StartContainer for \"0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca\"" Dec 13 00:25:35.281797 containerd[1658]: time="2025-12-13T00:25:35.281768448Z" level=info msg="connecting to shim 
0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca" address="unix:///run/containerd/s/114155aa7c821ec800c77dbf9026e5ba40914f8bf59ff40dbff1cdf2772efa2b" protocol=ttrpc version=3 Dec 13 00:25:35.357479 systemd[1]: Started cri-containerd-0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca.scope - libcontainer container 0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca. Dec 13 00:25:35.359211 systemd[1]: Started cri-containerd-1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e.scope - libcontainer container 1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e. Dec 13 00:25:35.360626 systemd[1]: Started cri-containerd-c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54.scope - libcontainer container c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54. Dec 13 00:25:35.379000 audit: BPF prog-id=102 op=LOAD Dec 13 00:25:35.380000 audit: BPF prog-id=103 op=LOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.380000 audit: BPF prog-id=103 op=UNLOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.380000 audit: BPF prog-id=104 op=LOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.380000 audit: BPF prog-id=105 op=LOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.380000 audit: BPF prog-id=105 op=UNLOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.380000 audit: BPF prog-id=104 op=UNLOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.380000 audit: BPF prog-id=106 op=LOAD Dec 13 00:25:35.380000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2534 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338333936643638346262333937656130343162626366613766653439 Dec 13 00:25:35.381000 audit: BPF prog-id=107 op=LOAD Dec 13 00:25:35.381000 audit: BPF prog-id=108 op=LOAD Dec 13 00:25:35.381000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.381000 audit: BPF prog-id=108 op=UNLOAD Dec 13 00:25:35.381000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.381000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.381000 audit: BPF prog-id=109 op=LOAD Dec 13 00:25:35.381000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.381000 audit: BPF prog-id=110 op=LOAD Dec 13 00:25:35.381000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.382000 audit: BPF prog-id=110 op=UNLOAD Dec 13 00:25:35.382000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.382000 audit: BPF prog-id=109 op=UNLOAD Dec 13 00:25:35.382000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.382000 audit: BPF prog-id=111 op=LOAD Dec 13 00:25:35.382000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2509 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.382000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062663733396162643032666666336537336164373331363265336266 Dec 13 00:25:35.384000 audit: BPF prog-id=112 op=LOAD Dec 13 00:25:35.385000 audit: BPF prog-id=113 op=LOAD Dec 13 00:25:35.385000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.385000 audit: BPF prog-id=113 op=UNLOAD Dec 13 00:25:35.385000 audit[2641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.385000 audit: BPF prog-id=114 op=LOAD Dec 13 00:25:35.385000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.386000 audit: BPF prog-id=115 op=LOAD Dec 13 00:25:35.386000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.386000 audit: BPF prog-id=115 op=UNLOAD Dec 13 00:25:35.386000 audit[2641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.386000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.386000 audit: BPF prog-id=114 op=UNLOAD Dec 13 00:25:35.386000 audit[2641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.386000 audit: BPF prog-id=116 op=LOAD Dec 13 00:25:35.386000 audit[2641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2531 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:35.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136373165626164376533343939333639393063383431343864396139 Dec 13 00:25:35.439671 containerd[1658]: time="2025-12-13T00:25:35.439604507Z" level=info msg="StartContainer for \"0bf739abd02fff3e73ad73162e3bfa32777030f46f3756e92e29f447f23c29ca\" returns successfully" Dec 13 00:25:35.446241 containerd[1658]: time="2025-12-13T00:25:35.446130025Z" level=info msg="StartContainer for \"c8396d684bb397ea041bbcfa7fe49396ac4abb129b70b30ce50be8953feb5b54\" returns successfully" Dec 13 00:25:35.449328 containerd[1658]: time="2025-12-13T00:25:35.449277828Z" level=info msg="StartContainer for \"1671ebad7e349936990c84148d9a98befa093b5dd723733a5bdc3a119891295e\" returns successfully" Dec 13 00:25:35.520817 kubelet[2455]: E1213 00:25:35.520769 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:35.520953 kubelet[2455]: E1213 00:25:35.520893 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:35.526131 kubelet[2455]: E1213 00:25:35.526091 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:35.526284 kubelet[2455]: E1213 00:25:35.526206 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:35.527670 kubelet[2455]: E1213 00:25:35.527653 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:35.528012 kubelet[2455]: E1213 00:25:35.527969 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:36.529415 kubelet[2455]: E1213 00:25:36.529366 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:36.530173 kubelet[2455]: E1213 00:25:36.529946 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:36.530173 kubelet[2455]: E1213 00:25:36.530108 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:25:36.530445 kubelet[2455]: E1213 00:25:36.530391 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:36.712055 kubelet[2455]: I1213 00:25:36.712014 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:37.067053 kubelet[2455]: E1213 00:25:37.067005 2455 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 00:25:37.295685 kubelet[2455]: I1213 00:25:37.295648 2455 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:25:37.385945 kubelet[2455]: I1213 00:25:37.385737 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:37.449648 kubelet[2455]: E1213 00:25:37.449570 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:37.449648 kubelet[2455]: I1213 00:25:37.449634 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:37.453395 kubelet[2455]: E1213 00:25:37.453364 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:37.453395 kubelet[2455]: I1213 00:25:37.453395 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:37.454750 kubelet[2455]: E1213 00:25:37.454708 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:37.466991 kubelet[2455]: I1213 00:25:37.466956 2455 apiserver.go:52] "Watching apiserver" Dec 13 00:25:37.482897 kubelet[2455]: I1213 00:25:37.482752 2455 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 00:25:37.529690 kubelet[2455]: I1213 00:25:37.529646 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:37.531541 kubelet[2455]: E1213 00:25:37.531518 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:37.531704 kubelet[2455]: E1213 00:25:37.531687 2455 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:39.632077 systemd[1]: Reload requested from client PID 2746 ('systemctl') (unit session-8.scope)... Dec 13 00:25:39.632092 systemd[1]: Reloading... Dec 13 00:25:39.717264 zram_generator::config[2792]: No configuration found. Dec 13 00:25:39.992202 systemd[1]: Reloading finished in 359 ms. Dec 13 00:25:40.017321 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:40.017618 kubelet[2455]: I1213 00:25:40.017377 2455 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:25:40.041673 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 00:25:40.042048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:40.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:40.042116 systemd[1]: kubelet.service: Consumed 931ms CPU time, 126.4M memory peak. Dec 13 00:25:40.043150 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 13 00:25:40.043225 kernel: audit: type=1131 audit(1765585540.040:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:40.044152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:25:40.043000 audit: BPF prog-id=117 op=LOAD Dec 13 00:25:40.047980 kernel: audit: type=1334 audit(1765585540.043:403): prog-id=117 op=LOAD Dec 13 00:25:40.048030 kernel: audit: type=1334 audit(1765585540.043:404): prog-id=70 op=UNLOAD Dec 13 00:25:40.043000 audit: BPF prog-id=70 op=UNLOAD Dec 13 00:25:40.046000 audit: BPF prog-id=118 op=LOAD Dec 13 00:25:40.050566 kernel: audit: type=1334 audit(1765585540.046:405): prog-id=118 op=LOAD Dec 13 00:25:40.050596 kernel: audit: type=1334 audit(1765585540.046:406): prog-id=74 op=UNLOAD Dec 13 00:25:40.046000 audit: BPF prog-id=74 op=UNLOAD Dec 13 00:25:40.046000 audit: BPF prog-id=119 op=LOAD Dec 13 00:25:40.046000 audit: BPF prog-id=120 op=LOAD Dec 13 00:25:40.046000 audit: BPF prog-id=75 op=UNLOAD Dec 13 00:25:40.046000 audit: BPF prog-id=76 op=UNLOAD Dec 13 00:25:40.047000 audit: BPF prog-id=121 op=LOAD Dec 13 00:25:40.053545 kernel: audit: type=1334 audit(1765585540.046:407): prog-id=119 op=LOAD Dec 13 00:25:40.053574 kernel: audit: type=1334 audit(1765585540.046:408): prog-id=120 op=LOAD Dec 13 00:25:40.053593 kernel: audit: type=1334 audit(1765585540.046:409): prog-id=75 op=UNLOAD Dec 13 00:25:40.053606 kernel: audit: type=1334 audit(1765585540.046:410): prog-id=76 op=UNLOAD Dec 13 00:25:40.053619 kernel: audit: type=1334 audit(1765585540.047:411): prog-id=121 op=LOAD Dec 13 00:25:40.047000 audit: BPF prog-id=81 op=UNLOAD Dec 13 00:25:40.048000 audit: BPF prog-id=122 op=LOAD Dec 13 00:25:40.048000 audit: BPF prog-id=82 op=UNLOAD Dec 13 00:25:40.048000 audit: BPF prog-id=123 op=LOAD Dec 13 00:25:40.048000 audit: BPF prog-id=124 op=LOAD Dec 13 00:25:40.048000 audit: BPF prog-id=83 op=UNLOAD Dec 13 00:25:40.048000 audit: BPF prog-id=84 op=UNLOAD Dec 13 00:25:40.050000 audit: BPF prog-id=125 op=LOAD Dec 13 00:25:40.050000 audit: BPF prog-id=80 op=UNLOAD Dec 13 00:25:40.050000 audit: BPF prog-id=126 op=LOAD Dec 13 
00:25:40.050000 audit: BPF prog-id=127 op=LOAD Dec 13 00:25:40.050000 audit: BPF prog-id=85 op=UNLOAD Dec 13 00:25:40.050000 audit: BPF prog-id=86 op=UNLOAD Dec 13 00:25:40.050000 audit: BPF prog-id=128 op=LOAD Dec 13 00:25:40.050000 audit: BPF prog-id=71 op=UNLOAD Dec 13 00:25:40.050000 audit: BPF prog-id=129 op=LOAD Dec 13 00:25:40.050000 audit: BPF prog-id=130 op=LOAD Dec 13 00:25:40.050000 audit: BPF prog-id=72 op=UNLOAD Dec 13 00:25:40.050000 audit: BPF prog-id=73 op=UNLOAD Dec 13 00:25:40.054000 audit: BPF prog-id=131 op=LOAD Dec 13 00:25:40.054000 audit: BPF prog-id=67 op=UNLOAD Dec 13 00:25:40.054000 audit: BPF prog-id=132 op=LOAD Dec 13 00:25:40.054000 audit: BPF prog-id=133 op=LOAD Dec 13 00:25:40.054000 audit: BPF prog-id=68 op=UNLOAD Dec 13 00:25:40.054000 audit: BPF prog-id=69 op=UNLOAD Dec 13 00:25:40.054000 audit: BPF prog-id=134 op=LOAD Dec 13 00:25:40.054000 audit: BPF prog-id=77 op=UNLOAD Dec 13 00:25:40.054000 audit: BPF prog-id=135 op=LOAD Dec 13 00:25:40.067000 audit: BPF prog-id=136 op=LOAD Dec 13 00:25:40.067000 audit: BPF prog-id=78 op=UNLOAD Dec 13 00:25:40.067000 audit: BPF prog-id=79 op=UNLOAD Dec 13 00:25:40.307549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:25:40.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:25:40.317538 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:25:40.408589 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:25:40.408589 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:25:40.408969 kubelet[2837]: I1213 00:25:40.408638 2837 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:25:40.414727 kubelet[2837]: I1213 00:25:40.414688 2837 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 13 00:25:40.414727 kubelet[2837]: I1213 00:25:40.414718 2837 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:25:40.414794 kubelet[2837]: I1213 00:25:40.414748 2837 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 13 00:25:40.414794 kubelet[2837]: I1213 00:25:40.414760 2837 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
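The recurring dns.go:154 "Nameserver limits exceeded" errors in this log come from the kubelet capping the node's resolv.conf at the usual limit of three nameservers and applying only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of that capping behaviour, assuming the three-server limit; it is an illustration, not the kubelet's actual dns.go:

    // Sketch only: mimics the behaviour behind the "Nameserver limits exceeded"
    // warning, assuming a resolv.conf limit of three nameservers.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // assumed limit; matches the three servers applied in the log

    // applyNameservers returns the nameservers that would be applied and whether
    // any had to be omitted.
    func applyNameservers(resolvConf string) ([]string, bool) {
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            return servers[:maxNameservers], true
        }
        return servers, false
    }

    func main() {
        // The fourth server is hypothetical, added only to trigger the warning path.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        applied, omitted := applyNameservers(conf)
        if omitted {
            fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }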
Dec 13 00:25:40.415020 kubelet[2837]: I1213 00:25:40.414997 2837 server.go:956] "Client rotation is on, will bootstrap in background" Dec 13 00:25:40.416137 kubelet[2837]: I1213 00:25:40.416121 2837 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 13 00:25:40.418485 kubelet[2837]: I1213 00:25:40.418446 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:25:40.423631 kubelet[2837]: I1213 00:25:40.423593 2837 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:25:40.428460 kubelet[2837]: I1213 00:25:40.428418 2837 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 13 00:25:40.428743 kubelet[2837]: I1213 00:25:40.428701 2837 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:25:40.428901 kubelet[2837]: I1213 00:25:40.428729 2837 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:25:40.428901 kubelet[2837]: I1213 00:25:40.428895 2837 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:25:40.428901 kubelet[2837]: I1213 00:25:40.428904 2837 container_manager_linux.go:306] "Creating device plugin manager" Dec 13 00:25:40.429071 kubelet[2837]: I1213 00:25:40.428932 2837 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 13 00:25:40.429801 kubelet[2837]: I1213 00:25:40.429769 2837 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:25:40.429977 kubelet[2837]: I1213 00:25:40.429950 2837 kubelet.go:475] "Attempting to sync node with API server" Dec 13 00:25:40.429977 kubelet[2837]: I1213 00:25:40.429975 2837 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:25:40.430036 kubelet[2837]: I1213 00:25:40.429999 2837 kubelet.go:387] "Adding apiserver pod source" Dec 
13 00:25:40.430036 kubelet[2837]: I1213 00:25:40.430018 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:25:40.430806 kubelet[2837]: I1213 00:25:40.430782 2837 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:25:40.431370 kubelet[2837]: I1213 00:25:40.431344 2837 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 13 00:25:40.431424 kubelet[2837]: I1213 00:25:40.431380 2837 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 13 00:25:40.435679 kubelet[2837]: I1213 00:25:40.435624 2837 server.go:1262] "Started kubelet" Dec 13 00:25:40.436947 kubelet[2837]: I1213 00:25:40.436929 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:25:40.442007 kubelet[2837]: I1213 00:25:40.441966 2837 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:25:40.444409 kubelet[2837]: I1213 00:25:40.443774 2837 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 13 00:25:40.446732 kubelet[2837]: I1213 00:25:40.446701 2837 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 00:25:40.446913 kubelet[2837]: I1213 00:25:40.446892 2837 reconciler.go:29] "Reconciler: start to sync state" Dec 13 00:25:40.447780 kubelet[2837]: I1213 00:25:40.447725 2837 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:25:40.447909 kubelet[2837]: I1213 00:25:40.447793 2837 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 13 00:25:40.448052 kubelet[2837]: I1213 00:25:40.448035 2837 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:25:40.449786 kubelet[2837]: E1213 00:25:40.449760 2837 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:25:40.450694 kubelet[2837]: I1213 00:25:40.450644 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:25:40.452345 kubelet[2837]: I1213 00:25:40.452308 2837 factory.go:223] Registration of the systemd container factory successfully Dec 13 00:25:40.452457 kubelet[2837]: I1213 00:25:40.452416 2837 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:25:40.454092 kubelet[2837]: I1213 00:25:40.454056 2837 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 13 00:25:40.454736 kubelet[2837]: I1213 00:25:40.454713 2837 factory.go:223] Registration of the containerd container factory successfully Dec 13 00:25:40.460019 kubelet[2837]: I1213 00:25:40.459995 2837 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 13 00:25:40.460121 kubelet[2837]: I1213 00:25:40.460112 2837 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 13 00:25:40.460196 kubelet[2837]: I1213 00:25:40.460188 2837 kubelet.go:2427] "Starting kubelet main sync loop" Dec 13 00:25:40.460309 kubelet[2837]: E1213 00:25:40.460291 2837 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:25:40.561307 kubelet[2837]: E1213 00:25:40.561157 2837 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 00:25:40.761891 kubelet[2837]: E1213 00:25:40.761834 2837 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 00:25:41.162432 kubelet[2837]: E1213 00:25:41.162362 2837 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 00:25:41.205979 kubelet[2837]: I1213 00:25:41.205927 2837 server.go:310] "Adding debug handlers to kubelet server" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237577 2837 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237593 2837 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237610 2837 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237721 2837 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237729 2837 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237745 2837 policy_none.go:49] "None policy: Start" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237753 2837 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 13 00:25:41.237838 kubelet[2837]: I1213 00:25:41.237762 2837 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 13 00:25:41.238164 kubelet[2837]: I1213 00:25:41.237889 2837 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 13 00:25:41.238164 kubelet[2837]: I1213 00:25:41.237897 2837 policy_none.go:47] "Start" Dec 13 00:25:41.241884 kubelet[2837]: E1213 00:25:41.241806 2837 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 13 00:25:41.242033 kubelet[2837]: I1213 00:25:41.242005 2837 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:25:41.242096 kubelet[2837]: I1213 00:25:41.242019 2837 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:25:41.242225 kubelet[2837]: I1213 00:25:41.242201 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:25:41.245011 kubelet[2837]: E1213 00:25:41.244987 2837 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 00:25:41.348891 kubelet[2837]: I1213 00:25:41.348837 2837 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:25:41.431462 kubelet[2837]: I1213 00:25:41.431345 2837 apiserver.go:52] "Watching apiserver" Dec 13 00:25:41.963536 kubelet[2837]: I1213 00:25:41.963501 2837 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:41.963665 kubelet[2837]: I1213 00:25:41.963564 2837 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:41.963665 kubelet[2837]: I1213 00:25:41.963510 2837 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:42.047133 kubelet[2837]: I1213 00:25:42.047088 2837 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 00:25:42.056194 kubelet[2837]: I1213 00:25:42.056146 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:42.056194 kubelet[2837]: I1213 00:25:42.056170 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:42.056194 kubelet[2837]: I1213 00:25:42.056187 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:42.056194 kubelet[2837]: I1213 00:25:42.056202 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7454d231c50d5cac6d5acf26c8573dc9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7454d231c50d5cac6d5acf26c8573dc9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:42.056475 kubelet[2837]: I1213 00:25:42.056258 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:42.056475 kubelet[2837]: I1213 00:25:42.056309 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:25:42.056475 kubelet[2837]: I1213 00:25:42.056336 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:25:42.056475 kubelet[2837]: I1213 00:25:42.056351 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7454d231c50d5cac6d5acf26c8573dc9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7454d231c50d5cac6d5acf26c8573dc9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:42.056475 kubelet[2837]: I1213 00:25:42.056364 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7454d231c50d5cac6d5acf26c8573dc9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7454d231c50d5cac6d5acf26c8573dc9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:25:42.475684 kubelet[2837]: E1213 00:25:42.475555 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:42.476104 kubelet[2837]: E1213 00:25:42.475697 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:42.476574 kubelet[2837]: E1213 00:25:42.476554 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:42.711649 kubelet[2837]: I1213 00:25:42.711609 2837 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 00:25:42.712006 kubelet[2837]: I1213 00:25:42.711894 2837 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:25:42.794995 kubelet[2837]: I1213 00:25:42.794906 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.794883246 podStartE2EDuration="1.794883246s" podCreationTimestamp="2025-12-13 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:25:42.794429596 +0000 UTC m=+2.421965803" watchObservedRunningTime="2025-12-13 00:25:42.794883246 +0000 UTC m=+2.422419473" Dec 13 00:25:42.827913 kubelet[2837]: I1213 00:25:42.827570 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8275488119999999 podStartE2EDuration="1.827548812s" podCreationTimestamp="2025-12-13 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:25:42.815055808 +0000 UTC m=+2.442592025" watchObservedRunningTime="2025-12-13 00:25:42.827548812 +0000 UTC m=+2.455085019" Dec 13 00:25:43.221021 kubelet[2837]: E1213 00:25:43.220906 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:43.221703 kubelet[2837]: E1213 00:25:43.221674 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:43.221951 kubelet[2837]: E1213 00:25:43.221922 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:43.307445 kubelet[2837]: I1213 00:25:43.306529 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.30557328 podStartE2EDuration="2.30557328s" podCreationTimestamp="2025-12-13 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:25:42.848762517 +0000 UTC m=+2.476298714" watchObservedRunningTime="2025-12-13 00:25:43.30557328 +0000 UTC m=+2.933109488" Dec 13 00:25:44.222146 kubelet[2837]: E1213 00:25:44.222086 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:44.222725 kubelet[2837]: E1213 00:25:44.222174 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:45.225294 kubelet[2837]: E1213 00:25:45.225259 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:45.272643 kubelet[2837]: E1213 00:25:45.272589 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:46.225842 kubelet[2837]: E1213 00:25:46.225780 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:46.896928 kubelet[2837]: I1213 00:25:46.896882 2837 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 00:25:46.897384 containerd[1658]: time="2025-12-13T00:25:46.897326161Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 00:25:46.897940 kubelet[2837]: I1213 00:25:46.897572 2837 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 00:25:48.640696 systemd[1]: Created slice kubepods-besteffort-podc325a64d_a836_4600_ac31_32f51e11e2c3.slice - libcontainer container kubepods-besteffort-podc325a64d_a836_4600_ac31_32f51e11e2c3.slice. Dec 13 00:25:48.669530 systemd[1]: Created slice kubepods-besteffort-pod4d9013cb_77e0_4d61_9595_a87f427bb761.slice - libcontainer container kubepods-besteffort-pod4d9013cb_77e0_4d61_9595_a87f427bb761.slice. 
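The kubepods-besteffort-pod….slice units that systemd just created encode the pod UIDs that appear in the volume records below: under the systemd cgroup driver naming seen here, the UID's dashes become underscores inside a QoS-class slice name. A small sketch of that mapping, as an illustration rather than kubelet code:

    // Sketch: reproduces the BestEffort pod slice names observed above from the
    // pod UIDs, assuming the dash-to-underscore convention visible in this log.
    package main

    import (
        "fmt"
        "strings"
    )

    func besteffortPodSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        for _, uid := range []string{
            "c325a64d-a836-4600-ac31-32f51e11e2c3", // kube-proxy-wq4q4
            "4d9013cb-77e0-4d61-9595-a87f427bb761", // tigera-operator-65cdcdfd6d-dpdrf
        } {
            fmt.Println(besteffortPodSlice(uid))
        }
    }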
Dec 13 00:25:48.670590 kubelet[2837]: E1213 00:25:48.669984 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:48.697687 kubelet[2837]: I1213 00:25:48.697636 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c325a64d-a836-4600-ac31-32f51e11e2c3-xtables-lock\") pod \"kube-proxy-wq4q4\" (UID: \"c325a64d-a836-4600-ac31-32f51e11e2c3\") " pod="kube-system/kube-proxy-wq4q4" Dec 13 00:25:48.697687 kubelet[2837]: I1213 00:25:48.697674 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9h8t\" (UniqueName: \"kubernetes.io/projected/4d9013cb-77e0-4d61-9595-a87f427bb761-kube-api-access-p9h8t\") pod \"tigera-operator-65cdcdfd6d-dpdrf\" (UID: \"4d9013cb-77e0-4d61-9595-a87f427bb761\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-dpdrf" Dec 13 00:25:48.697687 kubelet[2837]: I1213 00:25:48.697691 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d9013cb-77e0-4d61-9595-a87f427bb761-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-dpdrf\" (UID: \"4d9013cb-77e0-4d61-9595-a87f427bb761\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-dpdrf" Dec 13 00:25:48.697926 kubelet[2837]: I1213 00:25:48.697707 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c325a64d-a836-4600-ac31-32f51e11e2c3-lib-modules\") pod \"kube-proxy-wq4q4\" (UID: \"c325a64d-a836-4600-ac31-32f51e11e2c3\") " pod="kube-system/kube-proxy-wq4q4" Dec 13 00:25:48.697926 kubelet[2837]: I1213 00:25:48.697753 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4stcv\" (UniqueName: \"kubernetes.io/projected/c325a64d-a836-4600-ac31-32f51e11e2c3-kube-api-access-4stcv\") pod \"kube-proxy-wq4q4\" (UID: \"c325a64d-a836-4600-ac31-32f51e11e2c3\") " pod="kube-system/kube-proxy-wq4q4" Dec 13 00:25:48.697926 kubelet[2837]: I1213 00:25:48.697811 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c325a64d-a836-4600-ac31-32f51e11e2c3-kube-proxy\") pod \"kube-proxy-wq4q4\" (UID: \"c325a64d-a836-4600-ac31-32f51e11e2c3\") " pod="kube-system/kube-proxy-wq4q4" Dec 13 00:25:48.955130 kubelet[2837]: E1213 00:25:48.954786 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:48.955383 containerd[1658]: time="2025-12-13T00:25:48.955304384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq4q4,Uid:c325a64d-a836-4600-ac31-32f51e11e2c3,Namespace:kube-system,Attempt:0,}" Dec 13 00:25:48.977850 containerd[1658]: time="2025-12-13T00:25:48.977801282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-dpdrf,Uid:4d9013cb-77e0-4d61-9595-a87f427bb761,Namespace:tigera-operator,Attempt:0,}" Dec 13 00:25:48.995924 containerd[1658]: time="2025-12-13T00:25:48.995854660Z" level=info msg="connecting to shim 711163ba221b561b9f1393a1f1ede1b073a81d20e2e75a00352144af550c7c3a" 
address="unix:///run/containerd/s/828a3f937698d87b55ff58cf741caae92b4371517ec5f073b71e3f829dc59e50" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:49.023465 systemd[1]: Started cri-containerd-711163ba221b561b9f1393a1f1ede1b073a81d20e2e75a00352144af550c7c3a.scope - libcontainer container 711163ba221b561b9f1393a1f1ede1b073a81d20e2e75a00352144af550c7c3a. Dec 13 00:25:49.035000 audit: BPF prog-id=137 op=LOAD Dec 13 00:25:49.039940 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 00:25:49.040030 kernel: audit: type=1334 audit(1765585549.035:444): prog-id=137 op=LOAD Dec 13 00:25:49.040060 kernel: audit: type=1334 audit(1765585549.036:445): prog-id=138 op=LOAD Dec 13 00:25:49.036000 audit: BPF prog-id=138 op=LOAD Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.047469 kernel: audit: type=1300 audit(1765585549.036:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.047597 kernel: audit: type=1327 audit(1765585549.036:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.053496 kernel: audit: type=1334 audit(1765585549.036:446): prog-id=138 op=UNLOAD Dec 13 00:25:49.036000 audit: BPF prog-id=138 op=UNLOAD Dec 13 00:25:49.061850 kernel: audit: type=1300 audit(1765585549.036:446): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.068087 kernel: audit: type=1327 audit(1765585549.036:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: BPF 
prog-id=139 op=LOAD Dec 13 00:25:49.075477 kernel: audit: type=1334 audit(1765585549.036:447): prog-id=139 op=LOAD Dec 13 00:25:49.075556 kernel: audit: type=1300 audit(1765585549.036:447): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.076822 containerd[1658]: time="2025-12-13T00:25:49.076689081Z" level=info msg="connecting to shim 558062e7cf1977e04934f2913f551e243544abdb78af040497186f813fb62755" address="unix:///run/containerd/s/bc52d379a6b25a4e6f226ddbe5b8fc082e77822693e4234da7596a3dd9783315" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:25:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: BPF prog-id=140 op=LOAD Dec 13 00:25:49.083361 kernel: audit: type=1327 audit(1765585549.036:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: BPF prog-id=140 op=UNLOAD Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: BPF prog-id=139 op=UNLOAD Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.036000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.036000 audit: BPF prog-id=141 op=LOAD Dec 13 00:25:49.036000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2901 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.084819 containerd[1658]: time="2025-12-13T00:25:49.084772106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq4q4,Uid:c325a64d-a836-4600-ac31-32f51e11e2c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"711163ba221b561b9f1393a1f1ede1b073a81d20e2e75a00352144af550c7c3a\"" Dec 13 00:25:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313136336261323231623536316239663133393361316631656465 Dec 13 00:25:49.087609 kubelet[2837]: E1213 00:25:49.087564 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:49.101132 containerd[1658]: time="2025-12-13T00:25:49.101079614Z" level=info msg="CreateContainer within sandbox \"711163ba221b561b9f1393a1f1ede1b073a81d20e2e75a00352144af550c7c3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 00:25:49.113897 containerd[1658]: time="2025-12-13T00:25:49.113840149Z" level=info msg="Container 1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:49.115467 systemd[1]: Started cri-containerd-558062e7cf1977e04934f2913f551e243544abdb78af040497186f813fb62755.scope - libcontainer container 558062e7cf1977e04934f2913f551e243544abdb78af040497186f813fb62755. 
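The audit SYSCALL records around these runc invocations carry the process command line in the proctitle= field, hex-encoded with NUL bytes separating the arguments. A hypothetical helper (not part of auditd) for reading those values back; the sample input is a truncated prefix of one of the proctitles above:

    // decodeProctitle turns an audit proctitle= hex string back into a readable
    // command line by decoding the hex and replacing the NUL separators with spaces.
    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func decodeProctitle(hexVal string) (string, error) {
        raw, err := hex.DecodeString(hexVal)
        if err != nil {
            return "", err
        }
        return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
        // Truncated prefix of a runc proctitle from the records above.
        v := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
        s, err := decodeProctitle(v)
        if err != nil {
            panic(err)
        }
        fmt.Println(s) // runc --root /run/containerd/runc/k8s.io
    }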
Dec 13 00:25:49.123161 containerd[1658]: time="2025-12-13T00:25:49.123036807Z" level=info msg="CreateContainer within sandbox \"711163ba221b561b9f1393a1f1ede1b073a81d20e2e75a00352144af550c7c3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586\"" Dec 13 00:25:49.123944 containerd[1658]: time="2025-12-13T00:25:49.123907661Z" level=info msg="StartContainer for \"1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586\"" Dec 13 00:25:49.127298 containerd[1658]: time="2025-12-13T00:25:49.127192350Z" level=info msg="connecting to shim 1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586" address="unix:///run/containerd/s/828a3f937698d87b55ff58cf741caae92b4371517ec5f073b71e3f829dc59e50" protocol=ttrpc version=3 Dec 13 00:25:49.127000 audit: BPF prog-id=142 op=LOAD Dec 13 00:25:49.128000 audit: BPF prog-id=143 op=LOAD Dec 13 00:25:49.128000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000172238 a2=98 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.128000 audit: BPF prog-id=143 op=UNLOAD Dec 13 00:25:49.128000 audit[2956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.129000 audit: BPF prog-id=144 op=LOAD Dec 13 00:25:49.129000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000172488 a2=98 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.129000 audit: BPF prog-id=145 op=LOAD Dec 13 00:25:49.129000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000172218 a2=98 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.129000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.129000 audit: BPF prog-id=145 op=UNLOAD Dec 13 00:25:49.129000 audit[2956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.129000 audit: BPF prog-id=144 op=UNLOAD Dec 13 00:25:49.129000 audit[2956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.129000 audit: BPF prog-id=146 op=LOAD Dec 13 00:25:49.129000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001726e8 a2=98 a3=0 items=0 ppid=2939 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535383036326537636631393737653034393334663239313366353531 Dec 13 00:25:49.149682 systemd[1]: Started cri-containerd-1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586.scope - libcontainer container 1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586. 
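The NETFILTER_CFG burst that follows is kube-proxy bootstrapping its chains: decoding the proctitle= values shows per-table KUBE-PROXY-CANARY chains and the base service chains being created with "iptables -w 5 -N <chain> -t <table>" (with ip6tables counterparts at least for the canary chains) before the jump rules are inserted. A reconstructed summary of those chain creations, meant as a reading aid for the trail below rather than as kube-proxy's own code:

    // Chains whose creation is visible in the decoded audit proctitles below.
    package main

    import "fmt"

    func main() {
        chains := []struct{ table, chain string }{
            {"mangle", "KUBE-PROXY-CANARY"}, // canary chains, one per table
            {"nat", "KUBE-PROXY-CANARY"},
            {"filter", "KUBE-PROXY-CANARY"},
            {"filter", "KUBE-EXTERNAL-SERVICES"},
            {"filter", "KUBE-NODEPORTS"},
            {"filter", "KUBE-SERVICES"},
            {"filter", "KUBE-FORWARD"},
            {"filter", "KUBE-PROXY-FIREWALL"},
            {"nat", "KUBE-SERVICES"},
            {"nat", "KUBE-POSTROUTING"},
        }
        for _, c := range chains {
            fmt.Printf("iptables -w 5 -N %s -t %s\n", c.chain, c.table)
        }
    }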
Dec 13 00:25:49.178178 containerd[1658]: time="2025-12-13T00:25:49.178139356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-dpdrf,Uid:4d9013cb-77e0-4d61-9595-a87f427bb761,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"558062e7cf1977e04934f2913f551e243544abdb78af040497186f813fb62755\"" Dec 13 00:25:49.180534 containerd[1658]: time="2025-12-13T00:25:49.180224982Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 13 00:25:49.228000 audit: BPF prog-id=147 op=LOAD Dec 13 00:25:49.228000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2901 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623035373334373261623532346439346233383633316564356335 Dec 13 00:25:49.228000 audit: BPF prog-id=148 op=LOAD Dec 13 00:25:49.228000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2901 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623035373334373261623532346439346233383633316564356335 Dec 13 00:25:49.228000 audit: BPF prog-id=148 op=UNLOAD Dec 13 00:25:49.228000 audit[2976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623035373334373261623532346439346233383633316564356335 Dec 13 00:25:49.228000 audit: BPF prog-id=147 op=UNLOAD Dec 13 00:25:49.228000 audit[2976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2901 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623035373334373261623532346439346233383633316564356335 Dec 13 00:25:49.228000 audit: BPF prog-id=149 op=LOAD Dec 13 00:25:49.228000 audit[2976]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2901 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161623035373334373261623532346439346233383633316564356335 Dec 13 00:25:49.236616 kubelet[2837]: E1213 00:25:49.236580 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:49.254963 containerd[1658]: time="2025-12-13T00:25:49.254919089Z" level=info msg="StartContainer for \"1ab0573472ab524d94b38631ed5c5a7a43c500f5687c493b48dae48e66eb6586\" returns successfully" Dec 13 00:25:49.515000 audit[3050]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.515000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd555f1700 a2=0 a3=7ffd555f16ec items=0 ppid=2990 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:25:49.517000 audit[3049]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.517000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb31948a0 a2=0 a3=7ffeb319488c items=0 ppid=2990 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:25:49.519000 audit[3053]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.519000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc61cead0 a2=0 a3=7ffcc61ceabc items=0 ppid=2990 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:25:49.520000 audit[3052]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.520000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeff120bb0 a2=0 a3=7ffeff120b9c items=0 ppid=2990 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:25:49.521000 audit[3055]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3055 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.521000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcaa55aa80 a2=0 a3=7ffcaa55aa6c items=0 ppid=2990 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:25:49.522000 audit[3056]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.522000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff399f6cc0 a2=0 a3=7fff399f6cac items=0 ppid=2990 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:25:49.619000 audit[3058]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.619000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd6233d9d0 a2=0 a3=7ffd6233d9bc items=0 ppid=2990 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:25:49.623000 audit[3060]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.623000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcfe0df540 a2=0 a3=7ffcfe0df52c items=0 ppid=2990 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 13 00:25:49.628000 audit[3063]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.628000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc6ed0bc60 a2=0 a3=7ffc6ed0bc4c items=0 ppid=2990 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 13 00:25:49.630000 
audit[3064]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.630000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4e3a8660 a2=0 a3=7ffe4e3a864c items=0 ppid=2990 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:25:49.633000 audit[3066]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.633000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1b73d290 a2=0 a3=7fff1b73d27c items=0 ppid=2990 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:25:49.635000 audit[3067]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.635000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6bf7d600 a2=0 a3=7ffd6bf7d5ec items=0 ppid=2990 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:25:49.639000 audit[3069]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.639000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc60013080 a2=0 a3=7ffc6001306c items=0 ppid=2990 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.645000 audit[3072]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.645000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff2ceccdc0 a2=0 a3=7fff2ceccdac items=0 ppid=2990 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.645000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.646000 audit[3073]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.646000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd26f35870 a2=0 a3=7ffd26f3585c items=0 ppid=2990 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:25:49.649000 audit[3075]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.649000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffddea84c70 a2=0 a3=7ffddea84c5c items=0 ppid=2990 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:25:49.651000 audit[3076]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.651000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce0a6cc10 a2=0 a3=7ffce0a6cbfc items=0 ppid=2990 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:25:49.654000 audit[3078]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.654000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffbe719cb0 a2=0 a3=7fffbe719c9c items=0 ppid=2990 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 13 00:25:49.660000 audit[3081]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.660000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfec6dc70 a2=0 a3=7ffcfec6dc5c items=0 ppid=2990 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 13 00:25:49.665000 audit[3084]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.665000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe044e3af0 a2=0 a3=7ffe044e3adc items=0 ppid=2990 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 13 00:25:49.667000 audit[3085]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.667000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd01ed3b70 a2=0 a3=7ffd01ed3b5c items=0 ppid=2990 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:25:49.670000 audit[3087]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.670000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb92419b0 a2=0 a3=7ffdb924199c items=0 ppid=2990 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.675000 audit[3090]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.675000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe710b37c0 a2=0 a3=7ffe710b37ac items=0 ppid=2990 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.677000 audit[3091]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3091 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.677000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc65a2a4b0 a2=0 a3=7ffc65a2a49c items=0 ppid=2990 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:25:49.680000 audit[3093]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:25:49.680000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffeb27991e0 a2=0 a3=7ffeb27991cc items=0 ppid=2990 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:25:49.702000 audit[3099]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:25:49.702000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce3a71620 a2=0 a3=7ffce3a7160c items=0 ppid=2990 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:25:49.719000 audit[3099]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:25:49.719000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffce3a71620 a2=0 a3=7ffce3a7160c items=0 ppid=2990 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:25:49.721000 audit[3104]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.721000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd4ccf44f0 a2=0 a3=7ffd4ccf44dc items=0 ppid=2990 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:25:49.725000 audit[3106]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.725000 audit[3106]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffdb5c0eb00 a2=0 a3=7ffdb5c0eaec items=0 ppid=2990 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 13 00:25:49.731000 audit[3109]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.731000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc4cfa4160 a2=0 a3=7ffc4cfa414c items=0 ppid=2990 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 13 00:25:49.733000 audit[3110]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.733000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5f07fb10 a2=0 a3=7ffc5f07fafc items=0 ppid=2990 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:25:49.736000 audit[3112]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.736000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc83681420 a2=0 a3=7ffc8368140c items=0 ppid=2990 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:25:49.738000 audit[3113]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.738000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4395a8d0 a2=0 a3=7ffe4395a8bc items=0 ppid=2990 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 
00:25:49.741000 audit[3115]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.741000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe552854d0 a2=0 a3=7ffe552854bc items=0 ppid=2990 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.746000 audit[3118]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.746000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc7e35f0f0 a2=0 a3=7ffc7e35f0dc items=0 ppid=2990 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.746000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.748000 audit[3119]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.748000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb4d6f8a0 a2=0 a3=7fffb4d6f88c items=0 ppid=2990 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:25:49.752000 audit[3121]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.752000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd10572f10 a2=0 a3=7ffd10572efc items=0 ppid=2990 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:25:49.753000 audit[3122]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.753000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe27d59920 a2=0 a3=7ffe27d5990c items=0 ppid=2990 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:25:49.757000 audit[3124]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.757000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4d094820 a2=0 a3=7fff4d09480c items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 13 00:25:49.761000 audit[3127]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.761000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff33c52dd0 a2=0 a3=7fff33c52dbc items=0 ppid=2990 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 13 00:25:49.766000 audit[3130]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.766000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2a316f50 a2=0 a3=7fff2a316f3c items=0 ppid=2990 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 13 00:25:49.768000 audit[3131]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.768000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2b0290d0 a2=0 a3=7ffd2b0290bc items=0 ppid=2990 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:25:49.772000 audit[3133]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.772000 audit[3133]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=524 a0=3 a1=7ffd494f14a0 a2=0 a3=7ffd494f148c items=0 ppid=2990 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.776000 audit[3136]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.776000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcee79c460 a2=0 a3=7ffcee79c44c items=0 ppid=2990 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:25:49.778000 audit[3137]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.778000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe57c12c0 a2=0 a3=7fffe57c12ac items=0 ppid=2990 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:25:49.781000 audit[3139]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.781000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffde1c47890 a2=0 a3=7ffde1c4787c items=0 ppid=2990 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:25:49.783000 audit[3140]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.783000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe265adcc0 a2=0 a3=7ffe265adcac items=0 ppid=2990 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:25:49.786000 audit[3142]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 13 00:25:49.786000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0d4a5d50 a2=0 a3=7ffe0d4a5d3c items=0 ppid=2990 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:49.791000 audit[3145]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:25:49.791000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffefaeec2c0 a2=0 a3=7ffefaeec2ac items=0 ppid=2990 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:25:49.795000 audit[3147]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:25:49.795000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff2106ea20 a2=0 a3=7fff2106ea0c items=0 ppid=2990 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.795000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:25:49.796000 audit[3147]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:25:49.796000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff2106ea20 a2=0 a3=7fff2106ea0c items=0 ppid=2990 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:49.796000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:25:50.240429 kubelet[2837]: E1213 00:25:50.240401 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:50.611806 update_engine[1636]: I20251213 00:25:50.611655 1636 update_attempter.cc:509] Updating boot flags... 
Dec 13 00:25:50.745146 kubelet[2837]: I1213 00:25:50.744491 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wq4q4" podStartSLOduration=2.744468921 podStartE2EDuration="2.744468921s" podCreationTimestamp="2025-12-13 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:25:50.744333826 +0000 UTC m=+10.371870043" watchObservedRunningTime="2025-12-13 00:25:50.744468921 +0000 UTC m=+10.372005128" Dec 13 00:25:51.242261 kubelet[2837]: E1213 00:25:51.242184 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:53.044993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541810005.mount: Deactivated successfully. Dec 13 00:25:54.007540 containerd[1658]: time="2025-12-13T00:25:54.007466585Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:54.103758 containerd[1658]: time="2025-12-13T00:25:54.103659680Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558945" Dec 13 00:25:54.191132 containerd[1658]: time="2025-12-13T00:25:54.191044969Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:54.274824 containerd[1658]: time="2025-12-13T00:25:54.274686936Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:25:54.275468 containerd[1658]: time="2025-12-13T00:25:54.275435727Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.095057025s" Dec 13 00:25:54.275518 containerd[1658]: time="2025-12-13T00:25:54.275467557Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 13 00:25:54.443398 kubelet[2837]: E1213 00:25:54.443320 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:25:54.443861 containerd[1658]: time="2025-12-13T00:25:54.443332385Z" level=info msg="CreateContainer within sandbox \"558062e7cf1977e04934f2913f551e243544abdb78af040497186f813fb62755\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 00:25:54.644764 containerd[1658]: time="2025-12-13T00:25:54.644429656Z" level=info msg="Container c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:25:54.655546 containerd[1658]: time="2025-12-13T00:25:54.655483722Z" level=info msg="CreateContainer within sandbox \"558062e7cf1977e04934f2913f551e243544abdb78af040497186f813fb62755\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf\"" Dec 13 00:25:54.656183 containerd[1658]: time="2025-12-13T00:25:54.656135931Z" level=info msg="StartContainer for \"c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf\"" Dec 13 00:25:54.657252 containerd[1658]: time="2025-12-13T00:25:54.657188704Z" level=info msg="connecting to shim c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf" address="unix:///run/containerd/s/bc52d379a6b25a4e6f226ddbe5b8fc082e77822693e4234da7596a3dd9783315" protocol=ttrpc version=3 Dec 13 00:25:54.677453 systemd[1]: Started cri-containerd-c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf.scope - libcontainer container c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf. Dec 13 00:25:54.695294 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 13 00:25:54.695425 kernel: audit: type=1334 audit(1765585554.692:516): prog-id=150 op=LOAD Dec 13 00:25:54.692000 audit: BPF prog-id=150 op=LOAD Dec 13 00:25:54.695000 audit: BPF prog-id=151 op=LOAD Dec 13 00:25:54.695000 audit[3174]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.706196 kernel: audit: type=1334 audit(1765585554.695:517): prog-id=151 op=LOAD Dec 13 00:25:54.706346 kernel: audit: type=1300 audit(1765585554.695:517): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.706414 kernel: audit: type=1327 audit(1765585554.695:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.695000 audit: BPF prog-id=151 op=UNLOAD Dec 13 00:25:54.714608 kernel: audit: type=1334 audit(1765585554.695:518): prog-id=151 op=UNLOAD Dec 13 00:25:54.714673 kernel: audit: type=1300 audit(1765585554.695:518): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.695000 audit[3174]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.695000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.727662 kernel: audit: type=1327 audit(1765585554.695:518): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.727759 kernel: audit: type=1334 audit(1765585554.695:519): prog-id=152 op=LOAD Dec 13 00:25:54.695000 audit: BPF prog-id=152 op=LOAD Dec 13 00:25:54.729261 kernel: audit: type=1300 audit(1765585554.695:519): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.695000 audit[3174]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.742871 kernel: audit: type=1327 audit(1765585554.695:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.696000 audit: BPF prog-id=153 op=LOAD Dec 13 00:25:54.696000 audit[3174]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.696000 audit: BPF prog-id=153 op=UNLOAD Dec 13 00:25:54.696000 audit[3174]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.696000 audit: BPF prog-id=152 op=UNLOAD Dec 13 00:25:54.696000 audit[3174]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.696000 audit: BPF prog-id=154 op=LOAD Dec 13 00:25:54.696000 audit[3174]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2939 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:25:54.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338616564326339663263393539663665636333313232646365396562 Dec 13 00:25:54.746700 containerd[1658]: time="2025-12-13T00:25:54.746642290Z" level=info msg="StartContainer for \"c8aed2c9f2c959f6ecc3122dce9ebade633738dc460e478e2ce63ee49d8d9aaf\" returns successfully" Dec 13 00:25:55.261248 kubelet[2837]: I1213 00:25:55.261167 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-dpdrf" podStartSLOduration=2.164751692 podStartE2EDuration="7.261148562s" podCreationTimestamp="2025-12-13 00:25:48 +0000 UTC" firstStartedPulling="2025-12-13 00:25:49.17988972 +0000 UTC m=+8.807425927" lastFinishedPulling="2025-12-13 00:25:54.2762866 +0000 UTC m=+13.903822797" observedRunningTime="2025-12-13 00:25:55.260538803 +0000 UTC m=+14.888075010" watchObservedRunningTime="2025-12-13 00:25:55.261148562 +0000 UTC m=+14.888684769" Dec 13 00:26:01.199736 sudo[1870]: pam_unix(sudo:session): session closed for user root Dec 13 00:26:01.198000 audit[1870]: USER_END pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:26:01.202445 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 13 00:26:01.202493 kernel: audit: type=1106 audit(1765585561.198:524): pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:26:01.204300 sshd[1869]: Connection closed by 10.0.0.1 port 37190 Dec 13 00:26:01.206628 sshd-session[1865]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:01.212303 kernel: audit: type=1104 audit(1765585561.198:525): pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:26:01.198000 audit[1870]: CRED_DISP pid=1870 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:01.211000 audit[1865]: USER_END pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:01.216767 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:37190.service: Deactivated successfully. Dec 13 00:26:01.219561 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 00:26:01.220380 kernel: audit: type=1106 audit(1765585561.211:526): pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:01.219813 systemd[1]: session-8.scope: Consumed 5.781s CPU time, 193.4M memory peak. Dec 13 00:26:01.211000 audit[1865]: CRED_DISP pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:01.223308 systemd-logind[1630]: Session 8 logged out. Waiting for processes to exit. Dec 13 00:26:01.224515 systemd-logind[1630]: Removed session 8. Dec 13 00:26:01.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.109:22-10.0.0.1:37190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:01.229125 kernel: audit: type=1104 audit(1765585561.211:527): pid=1865 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:01.229171 kernel: audit: type=1131 audit(1765585561.215:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.109:22-10.0.0.1:37190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:01.865000 audit[3266]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:01.865000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa26eaaa0 a2=0 a3=7fffa26eaa8c items=0 ppid=2990 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:01.877252 kernel: audit: type=1325 audit(1765585561.865:529): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:01.877344 kernel: audit: type=1300 audit(1765585561.865:529): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa26eaaa0 a2=0 a3=7fffa26eaa8c items=0 ppid=2990 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:01.877365 kernel: audit: type=1327 audit(1765585561.865:529): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:01.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:01.880000 audit[3266]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:01.885256 kernel: audit: type=1325 audit(1765585561.880:530): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:01.880000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa26eaaa0 a2=0 a3=0 items=0 ppid=2990 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:01.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:01.892267 kernel: audit: type=1300 audit(1765585561.880:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa26eaaa0 a2=0 a3=0 items=0 ppid=2990 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:02.902000 audit[3268]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:02.902000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff2a5df9b0 a2=0 a3=7fff2a5df99c items=0 ppid=2990 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:02.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:02.906000 audit[3268]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:02.906000 
audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2a5df9b0 a2=0 a3=0 items=0 ppid=2990 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:02.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:03.917000 audit[3270]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:03.917000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0c2c34f0 a2=0 a3=7ffe0c2c34dc items=0 ppid=2990 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:03.917000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:03.928000 audit[3270]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:03.928000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0c2c34f0 a2=0 a3=0 items=0 ppid=2990 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:03.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:05.932000 audit[3272]: NETFILTER_CFG table=filter:111 family=2 entries=21 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:05.932000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdf8458700 a2=0 a3=7ffdf84586ec items=0 ppid=2990 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:05.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:05.938000 audit[3272]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:05.938000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf8458700 a2=0 a3=0 items=0 ppid=2990 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:05.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:06.427620 systemd[1]: Created slice kubepods-besteffort-pod01a03251_84d6_4cd2_8f43_79e52c439021.slice - libcontainer container kubepods-besteffort-pod01a03251_84d6_4cd2_8f43_79e52c439021.slice. 
Dec 13 00:26:06.518855 kubelet[2837]: I1213 00:26:06.518811 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01a03251-84d6-4cd2-8f43-79e52c439021-tigera-ca-bundle\") pod \"calico-typha-598bcd6df-rdsqd\" (UID: \"01a03251-84d6-4cd2-8f43-79e52c439021\") " pod="calico-system/calico-typha-598bcd6df-rdsqd" Dec 13 00:26:06.518855 kubelet[2837]: I1213 00:26:06.518854 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/01a03251-84d6-4cd2-8f43-79e52c439021-typha-certs\") pod \"calico-typha-598bcd6df-rdsqd\" (UID: \"01a03251-84d6-4cd2-8f43-79e52c439021\") " pod="calico-system/calico-typha-598bcd6df-rdsqd" Dec 13 00:26:06.519363 kubelet[2837]: I1213 00:26:06.518878 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqjc\" (UniqueName: \"kubernetes.io/projected/01a03251-84d6-4cd2-8f43-79e52c439021-kube-api-access-mxqjc\") pod \"calico-typha-598bcd6df-rdsqd\" (UID: \"01a03251-84d6-4cd2-8f43-79e52c439021\") " pod="calico-system/calico-typha-598bcd6df-rdsqd" Dec 13 00:26:06.651125 systemd[1]: Created slice kubepods-besteffort-pod680c671f_756a_4c01_88b5_06e3f530f27a.slice - libcontainer container kubepods-besteffort-pod680c671f_756a_4c01_88b5_06e3f530f27a.slice. Dec 13 00:26:06.720752 kubelet[2837]: I1213 00:26:06.720646 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/680c671f-756a-4c01-88b5-06e3f530f27a-tigera-ca-bundle\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720752 kubelet[2837]: I1213 00:26:06.720684 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-var-run-calico\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720752 kubelet[2837]: I1213 00:26:06.720700 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-cni-log-dir\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720752 kubelet[2837]: I1213 00:26:06.720715 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-policysync\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720752 kubelet[2837]: I1213 00:26:06.720729 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-xtables-lock\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720938 kubelet[2837]: I1213 00:26:06.720743 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgq98\" 
(UniqueName: \"kubernetes.io/projected/680c671f-756a-4c01-88b5-06e3f530f27a-kube-api-access-sgq98\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720938 kubelet[2837]: I1213 00:26:06.720759 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/680c671f-756a-4c01-88b5-06e3f530f27a-node-certs\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720938 kubelet[2837]: I1213 00:26:06.720772 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-var-lib-calico\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720938 kubelet[2837]: I1213 00:26:06.720803 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-flexvol-driver-host\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.720938 kubelet[2837]: I1213 00:26:06.720817 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-lib-modules\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.721043 kubelet[2837]: I1213 00:26:06.720836 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-cni-bin-dir\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.721043 kubelet[2837]: I1213 00:26:06.720848 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/680c671f-756a-4c01-88b5-06e3f530f27a-cni-net-dir\") pod \"calico-node-wl4h9\" (UID: \"680c671f-756a-4c01-88b5-06e3f530f27a\") " pod="calico-system/calico-node-wl4h9" Dec 13 00:26:06.833069 kubelet[2837]: E1213 00:26:06.831532 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:06.834203 containerd[1658]: time="2025-12-13T00:26:06.834170376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598bcd6df-rdsqd,Uid:01a03251-84d6-4cd2-8f43-79e52c439021,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:06.844270 kubelet[2837]: E1213 00:26:06.843167 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.844270 kubelet[2837]: W1213 00:26:06.843207 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.844270 kubelet[2837]: E1213 00:26:06.843280 2837 plugins.go:697] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.873543 containerd[1658]: time="2025-12-13T00:26:06.873459955Z" level=info msg="connecting to shim 591c323c2dd169635d940522ebfe0832e82950fc87fce78a581c06e647f54818" address="unix:///run/containerd/s/de9f75f6adadd264de6626bcc61089714ad55544bbd40e0f5d823fc09d5b24f7" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:06.879486 kubelet[2837]: E1213 00:26:06.879412 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:06.886170 kubelet[2837]: E1213 00:26:06.886117 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.886170 kubelet[2837]: W1213 00:26:06.886152 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.886170 kubelet[2837]: E1213 00:26:06.886175 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.886930 kubelet[2837]: E1213 00:26:06.886911 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.886930 kubelet[2837]: W1213 00:26:06.886925 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.887014 kubelet[2837]: E1213 00:26:06.886937 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.887914 kubelet[2837]: E1213 00:26:06.887890 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.887914 kubelet[2837]: W1213 00:26:06.887910 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.888002 kubelet[2837]: E1213 00:26:06.887923 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.888376 kubelet[2837]: E1213 00:26:06.888253 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.888376 kubelet[2837]: W1213 00:26:06.888285 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.888376 kubelet[2837]: E1213 00:26:06.888297 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.888737 kubelet[2837]: E1213 00:26:06.888567 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.888737 kubelet[2837]: W1213 00:26:06.888586 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.888737 kubelet[2837]: E1213 00:26:06.888596 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.888848 kubelet[2837]: E1213 00:26:06.888824 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.888848 kubelet[2837]: W1213 00:26:06.888835 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.888848 kubelet[2837]: E1213 00:26:06.888848 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.889274 kubelet[2837]: E1213 00:26:06.889079 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.889274 kubelet[2837]: W1213 00:26:06.889096 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.889274 kubelet[2837]: E1213 00:26:06.889105 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.889484 kubelet[2837]: E1213 00:26:06.889350 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.889484 kubelet[2837]: W1213 00:26:06.889360 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.889484 kubelet[2837]: E1213 00:26:06.889370 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.889640 kubelet[2837]: E1213 00:26:06.889619 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.889640 kubelet[2837]: W1213 00:26:06.889633 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.889709 kubelet[2837]: E1213 00:26:06.889644 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.889925 kubelet[2837]: E1213 00:26:06.889906 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.889925 kubelet[2837]: W1213 00:26:06.889920 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.890009 kubelet[2837]: E1213 00:26:06.889931 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.890163 kubelet[2837]: E1213 00:26:06.890145 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.890163 kubelet[2837]: W1213 00:26:06.890158 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.890316 kubelet[2837]: E1213 00:26:06.890169 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.890500 kubelet[2837]: E1213 00:26:06.890480 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.890500 kubelet[2837]: W1213 00:26:06.890495 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.890621 kubelet[2837]: E1213 00:26:06.890507 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.890752 kubelet[2837]: E1213 00:26:06.890732 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.890752 kubelet[2837]: W1213 00:26:06.890748 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.890923 kubelet[2837]: E1213 00:26:06.890758 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.890999 kubelet[2837]: E1213 00:26:06.890954 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.890999 kubelet[2837]: W1213 00:26:06.890964 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.890999 kubelet[2837]: E1213 00:26:06.890974 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.891323 kubelet[2837]: E1213 00:26:06.891159 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.891323 kubelet[2837]: W1213 00:26:06.891280 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.891323 kubelet[2837]: E1213 00:26:06.891293 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.894208 kubelet[2837]: E1213 00:26:06.894178 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.894208 kubelet[2837]: W1213 00:26:06.894199 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.894208 kubelet[2837]: E1213 00:26:06.894215 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.896108 kubelet[2837]: E1213 00:26:06.896005 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.896108 kubelet[2837]: W1213 00:26:06.896036 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.896108 kubelet[2837]: E1213 00:26:06.896053 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.897097 kubelet[2837]: E1213 00:26:06.896963 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.897097 kubelet[2837]: W1213 00:26:06.896990 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.897097 kubelet[2837]: E1213 00:26:06.897020 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.897475 kubelet[2837]: E1213 00:26:06.897461 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.897705 kubelet[2837]: W1213 00:26:06.897628 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.897705 kubelet[2837]: E1213 00:26:06.897647 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.898098 kubelet[2837]: E1213 00:26:06.897972 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.898098 kubelet[2837]: W1213 00:26:06.897988 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.898098 kubelet[2837]: E1213 00:26:06.898001 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.912503 systemd[1]: Started cri-containerd-591c323c2dd169635d940522ebfe0832e82950fc87fce78a581c06e647f54818.scope - libcontainer container 591c323c2dd169635d940522ebfe0832e82950fc87fce78a581c06e647f54818. Dec 13 00:26:06.924545 kubelet[2837]: E1213 00:26:06.924459 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.924545 kubelet[2837]: W1213 00:26:06.924484 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.924545 kubelet[2837]: E1213 00:26:06.924504 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.925136 kubelet[2837]: I1213 00:26:06.924833 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c357da7-f81d-4093-8d71-96d21eb95cdd-kubelet-dir\") pod \"csi-node-driver-8pkv7\" (UID: \"7c357da7-f81d-4093-8d71-96d21eb95cdd\") " pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:06.925597 kubelet[2837]: E1213 00:26:06.925576 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.925597 kubelet[2837]: W1213 00:26:06.925593 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.925676 kubelet[2837]: E1213 00:26:06.925607 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.926414 kubelet[2837]: E1213 00:26:06.926391 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.926414 kubelet[2837]: W1213 00:26:06.926408 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.926666 kubelet[2837]: E1213 00:26:06.926419 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.926786 kubelet[2837]: E1213 00:26:06.926697 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.926786 kubelet[2837]: W1213 00:26:06.926711 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.926786 kubelet[2837]: E1213 00:26:06.926720 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.926786 kubelet[2837]: I1213 00:26:06.926749 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c357da7-f81d-4093-8d71-96d21eb95cdd-varrun\") pod \"csi-node-driver-8pkv7\" (UID: \"7c357da7-f81d-4093-8d71-96d21eb95cdd\") " pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:06.927067 kubelet[2837]: E1213 00:26:06.927002 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.927067 kubelet[2837]: W1213 00:26:06.927039 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.927067 kubelet[2837]: E1213 00:26:06.927049 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.927067 kubelet[2837]: I1213 00:26:06.927070 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh74j\" (UniqueName: \"kubernetes.io/projected/7c357da7-f81d-4093-8d71-96d21eb95cdd-kube-api-access-kh74j\") pod \"csi-node-driver-8pkv7\" (UID: \"7c357da7-f81d-4093-8d71-96d21eb95cdd\") " pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:06.927421 kubelet[2837]: E1213 00:26:06.927390 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.927692 kubelet[2837]: W1213 00:26:06.927408 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.927692 kubelet[2837]: E1213 00:26:06.927628 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.927692 kubelet[2837]: I1213 00:26:06.927669 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c357da7-f81d-4093-8d71-96d21eb95cdd-registration-dir\") pod \"csi-node-driver-8pkv7\" (UID: \"7c357da7-f81d-4093-8d71-96d21eb95cdd\") " pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:06.928327 kubelet[2837]: E1213 00:26:06.928291 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.928327 kubelet[2837]: W1213 00:26:06.928309 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.928327 kubelet[2837]: E1213 00:26:06.928322 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.928682 kubelet[2837]: E1213 00:26:06.928571 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.928682 kubelet[2837]: W1213 00:26:06.928587 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.928682 kubelet[2837]: E1213 00:26:06.928597 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.928863 kubelet[2837]: E1213 00:26:06.928844 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.928863 kubelet[2837]: W1213 00:26:06.928859 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.928932 kubelet[2837]: E1213 00:26:06.928870 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.929220 kubelet[2837]: E1213 00:26:06.929201 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.929220 kubelet[2837]: W1213 00:26:06.929216 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.929410 kubelet[2837]: E1213 00:26:06.929226 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.929621 kubelet[2837]: E1213 00:26:06.929512 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.929621 kubelet[2837]: W1213 00:26:06.929527 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.929621 kubelet[2837]: E1213 00:26:06.929538 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.929621 kubelet[2837]: I1213 00:26:06.929574 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c357da7-f81d-4093-8d71-96d21eb95cdd-socket-dir\") pod \"csi-node-driver-8pkv7\" (UID: \"7c357da7-f81d-4093-8d71-96d21eb95cdd\") " pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:06.929900 kubelet[2837]: E1213 00:26:06.929801 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.929900 kubelet[2837]: W1213 00:26:06.929819 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.929900 kubelet[2837]: E1213 00:26:06.929830 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.930225 kubelet[2837]: E1213 00:26:06.930139 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.930225 kubelet[2837]: W1213 00:26:06.930154 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.930225 kubelet[2837]: E1213 00:26:06.930163 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.930480 kubelet[2837]: E1213 00:26:06.930449 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.930480 kubelet[2837]: W1213 00:26:06.930463 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.930480 kubelet[2837]: E1213 00:26:06.930472 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:06.930766 kubelet[2837]: E1213 00:26:06.930730 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:06.930766 kubelet[2837]: W1213 00:26:06.930742 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:06.930766 kubelet[2837]: E1213 00:26:06.930752 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:06.948282 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 00:26:06.948406 kernel: audit: type=1334 audit(1765585566.944:537): prog-id=155 op=LOAD Dec 13 00:26:06.944000 audit: BPF prog-id=155 op=LOAD Dec 13 00:26:06.944000 audit: BPF prog-id=156 op=LOAD Dec 13 00:26:06.944000 audit[3316]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.955869 kernel: audit: type=1334 audit(1765585566.944:538): prog-id=156 op=LOAD Dec 13 00:26:06.955910 kernel: audit: type=1300 audit(1765585566.944:538): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.961187 kernel: audit: type=1327 audit(1765585566.944:538): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.944000 audit: BPF prog-id=156 op=UNLOAD Dec 13 00:26:06.967954 kernel: audit: type=1334 audit(1765585566.944:539): prog-id=156 op=UNLOAD Dec 13 00:26:06.968004 kernel: audit: type=1300 audit(1765585566.944:539): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.944000 audit[3316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.973165 kernel: audit: type=1327 audit(1765585566.944:539): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.976507 kubelet[2837]: E1213 00:26:06.975065 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:06.944000 audit: BPF prog-id=157 op=LOAD Dec 13 00:26:06.978314 containerd[1658]: time="2025-12-13T00:26:06.977894677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wl4h9,Uid:680c671f-756a-4c01-88b5-06e3f530f27a,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:06.944000 audit[3316]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.985105 kernel: audit: type=1334 audit(1765585566.944:540): prog-id=157 op=LOAD Dec 13 00:26:06.985162 kernel: audit: type=1300 audit(1765585566.944:540): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.986274 kernel: audit: type=1327 audit(1765585566.944:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.945000 audit: BPF prog-id=158 op=LOAD Dec 13 00:26:06.945000 audit[3316]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.945000 audit: BPF prog-id=158 op=UNLOAD Dec 13 00:26:06.945000 audit[3316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.945000 audit: BPF prog-id=157 op=UNLOAD Dec 13 00:26:06.945000 audit[3316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.945000 audit: BPF prog-id=159 op=LOAD Dec 13 00:26:06.945000 audit[3316]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3300 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539316333323363326464313639363335643934303532326562666530 Dec 13 00:26:06.967000 audit[3376]: NETFILTER_CFG table=filter:113 family=2 entries=22 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:06.967000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc88cf0690 a2=0 a3=7ffc88cf067c items=0 ppid=2990 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:06.979000 audit[3376]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:06.979000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc88cf0690 a2=0 a3=0 items=0 ppid=2990 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:06.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:07.000897 containerd[1658]: time="2025-12-13T00:26:07.000844969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598bcd6df-rdsqd,Uid:01a03251-84d6-4cd2-8f43-79e52c439021,Namespace:calico-system,Attempt:0,} returns sandbox id \"591c323c2dd169635d940522ebfe0832e82950fc87fce78a581c06e647f54818\"" Dec 13 00:26:07.001657 kubelet[2837]: E1213 00:26:07.001615 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:07.003280 containerd[1658]: time="2025-12-13T00:26:07.003256902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 13 00:26:07.013708 containerd[1658]: time="2025-12-13T00:26:07.013659159Z" level=info msg="connecting to shim 25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74" address="unix:///run/containerd/s/e03de88eeafe4266c01c428ece5ad8fc0080aba9f8e99c77ead0f584bd120a27" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:07.032569 kubelet[2837]: E1213 00:26:07.032535 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.032569 kubelet[2837]: W1213 00:26:07.032560 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.032846 kubelet[2837]: E1213 00:26:07.032581 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.032846 kubelet[2837]: E1213 00:26:07.032815 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.033049 kubelet[2837]: W1213 00:26:07.032964 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.033049 kubelet[2837]: E1213 00:26:07.032981 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.033316 kubelet[2837]: E1213 00:26:07.033203 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.033316 kubelet[2837]: W1213 00:26:07.033215 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.033316 kubelet[2837]: E1213 00:26:07.033223 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.033705 kubelet[2837]: E1213 00:26:07.033675 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.033990 kubelet[2837]: W1213 00:26:07.033864 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.033990 kubelet[2837]: E1213 00:26:07.033888 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:07.034331 kubelet[2837]: E1213 00:26:07.034291 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.034331 kubelet[2837]: W1213 00:26:07.034303 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.034473 kubelet[2837]: E1213 00:26:07.034316 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.034988 kubelet[2837]: E1213 00:26:07.034973 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.035136 kubelet[2837]: W1213 00:26:07.035042 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.035136 kubelet[2837]: E1213 00:26:07.035055 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.035385 kubelet[2837]: E1213 00:26:07.035348 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.035385 kubelet[2837]: W1213 00:26:07.035357 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.035529 kubelet[2837]: E1213 00:26:07.035366 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.035752 kubelet[2837]: E1213 00:26:07.035721 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.035752 kubelet[2837]: W1213 00:26:07.035731 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.035752 kubelet[2837]: E1213 00:26:07.035740 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.036049 kubelet[2837]: E1213 00:26:07.036018 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.036049 kubelet[2837]: W1213 00:26:07.036028 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.036049 kubelet[2837]: E1213 00:26:07.036036 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:07.036582 kubelet[2837]: E1213 00:26:07.036551 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.036582 kubelet[2837]: W1213 00:26:07.036561 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.036582 kubelet[2837]: E1213 00:26:07.036570 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.036843 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.038466 kubelet[2837]: W1213 00:26:07.036850 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.036859 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.037087 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.038466 kubelet[2837]: W1213 00:26:07.037094 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.037102 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.037306 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.038466 kubelet[2837]: W1213 00:26:07.037313 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.037321 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.038466 kubelet[2837]: E1213 00:26:07.037551 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.038400 systemd[1]: Started cri-containerd-25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74.scope - libcontainer container 25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74. 
Dec 13 00:26:07.042858 kubelet[2837]: W1213 00:26:07.037558 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.042858 kubelet[2837]: E1213 00:26:07.037565 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.042858 kubelet[2837]: E1213 00:26:07.037745 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.042858 kubelet[2837]: W1213 00:26:07.037752 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.042858 kubelet[2837]: E1213 00:26:07.037759 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.042858 kubelet[2837]: E1213 00:26:07.037943 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.042858 kubelet[2837]: W1213 00:26:07.037950 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.042858 kubelet[2837]: E1213 00:26:07.037957 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.042858 kubelet[2837]: E1213 00:26:07.038137 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.042858 kubelet[2837]: W1213 00:26:07.038144 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.038153 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.038704 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043082 kubelet[2837]: W1213 00:26:07.038713 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.038722 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.038993 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043082 kubelet[2837]: W1213 00:26:07.039000 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.039009 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.039305 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043082 kubelet[2837]: W1213 00:26:07.039312 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043082 kubelet[2837]: E1213 00:26:07.039353 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.039581 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043447 kubelet[2837]: W1213 00:26:07.039589 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.039597 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.039813 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043447 kubelet[2837]: W1213 00:26:07.039821 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.039830 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.040057 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043447 kubelet[2837]: W1213 00:26:07.040066 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.040074 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:07.043447 kubelet[2837]: E1213 00:26:07.040326 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043664 kubelet[2837]: W1213 00:26:07.040335 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043664 kubelet[2837]: E1213 00:26:07.040357 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.043664 kubelet[2837]: E1213 00:26:07.040931 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.043664 kubelet[2837]: W1213 00:26:07.040941 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.043664 kubelet[2837]: E1213 00:26:07.040952 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:07.051000 audit: BPF prog-id=160 op=LOAD Dec 13 00:26:07.052000 audit: BPF prog-id=161 op=LOAD Dec 13 00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.052000 audit: BPF prog-id=161 op=UNLOAD Dec 13 00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.052000 audit: BPF prog-id=162 op=LOAD Dec 13 00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.052000 audit: BPF prog-id=163 op=LOAD Dec 13 
00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.052000 audit: BPF prog-id=163 op=UNLOAD Dec 13 00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.052000 audit: BPF prog-id=162 op=UNLOAD Dec 13 00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.052000 audit: BPF prog-id=164 op=LOAD Dec 13 00:26:07.052000 audit[3406]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3392 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:07.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235636139643931636563636239626664616337653034333233383736 Dec 13 00:26:07.120252 kubelet[2837]: E1213 00:26:07.120206 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:07.120252 kubelet[2837]: W1213 00:26:07.120226 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:07.120252 kubelet[2837]: E1213 00:26:07.120260 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:07.146332 containerd[1658]: time="2025-12-13T00:26:07.146277575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wl4h9,Uid:680c671f-756a-4c01-88b5-06e3f530f27a,Namespace:calico-system,Attempt:0,} returns sandbox id \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\"" Dec 13 00:26:07.151869 kubelet[2837]: E1213 00:26:07.151842 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:08.464660 kubelet[2837]: E1213 00:26:08.464626 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:08.477029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489403444.mount: Deactivated successfully. Dec 13 00:26:09.284033 containerd[1658]: time="2025-12-13T00:26:09.283989567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:09.285499 containerd[1658]: time="2025-12-13T00:26:09.284903184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 13 00:26:09.286268 containerd[1658]: time="2025-12-13T00:26:09.286224497Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:09.289164 containerd[1658]: time="2025-12-13T00:26:09.289118483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:09.289624 containerd[1658]: time="2025-12-13T00:26:09.289590280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.286181673s" Dec 13 00:26:09.289624 containerd[1658]: time="2025-12-13T00:26:09.289614796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 13 00:26:09.290321 containerd[1658]: time="2025-12-13T00:26:09.290292469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 13 00:26:09.304121 containerd[1658]: time="2025-12-13T00:26:09.304061576Z" level=info msg="CreateContainer within sandbox \"591c323c2dd169635d940522ebfe0832e82950fc87fce78a581c06e647f54818\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 00:26:09.312851 containerd[1658]: time="2025-12-13T00:26:09.312802077Z" level=info msg="Container 3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:09.320765 containerd[1658]: time="2025-12-13T00:26:09.320722226Z" level=info msg="CreateContainer within sandbox 
\"591c323c2dd169635d940522ebfe0832e82950fc87fce78a581c06e647f54818\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f\"" Dec 13 00:26:09.321358 containerd[1658]: time="2025-12-13T00:26:09.321325469Z" level=info msg="StartContainer for \"3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f\"" Dec 13 00:26:09.322450 containerd[1658]: time="2025-12-13T00:26:09.322418102Z" level=info msg="connecting to shim 3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f" address="unix:///run/containerd/s/de9f75f6adadd264de6626bcc61089714ad55544bbd40e0f5d823fc09d5b24f7" protocol=ttrpc version=3 Dec 13 00:26:09.348594 systemd[1]: Started cri-containerd-3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f.scope - libcontainer container 3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f. Dec 13 00:26:09.365000 audit: BPF prog-id=165 op=LOAD Dec 13 00:26:09.366000 audit: BPF prog-id=166 op=LOAD Dec 13 00:26:09.366000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.367000 audit: BPF prog-id=166 op=UNLOAD Dec 13 00:26:09.367000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.367000 audit: BPF prog-id=167 op=LOAD Dec 13 00:26:09.367000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.367000 audit: BPF prog-id=168 op=LOAD Dec 13 00:26:09.367000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.367000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.367000 audit: BPF prog-id=168 op=UNLOAD Dec 13 00:26:09.367000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.367000 audit: BPF prog-id=167 op=UNLOAD Dec 13 00:26:09.367000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.367000 audit: BPF prog-id=169 op=LOAD Dec 13 00:26:09.367000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3300 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:09.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332333465363265666661373565656635376532366333336366353130 Dec 13 00:26:09.537662 containerd[1658]: time="2025-12-13T00:26:09.537519966Z" level=info msg="StartContainer for \"3234e62effa75eef57e26c33cf510e4e0e84845e7fde3324e19e69b6d7d2483f\" returns successfully" Dec 13 00:26:10.290512 kubelet[2837]: E1213 00:26:10.290477 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:10.317978 kubelet[2837]: E1213 00:26:10.317939 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.317978 kubelet[2837]: W1213 00:26:10.317966 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.317978 kubelet[2837]: E1213 00:26:10.317991 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.318298 kubelet[2837]: E1213 00:26:10.318279 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.318298 kubelet[2837]: W1213 00:26:10.318292 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.318468 kubelet[2837]: E1213 00:26:10.318303 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.318539 kubelet[2837]: E1213 00:26:10.318521 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.318539 kubelet[2837]: W1213 00:26:10.318530 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.318663 kubelet[2837]: E1213 00:26:10.318541 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.318794 kubelet[2837]: E1213 00:26:10.318772 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.318794 kubelet[2837]: W1213 00:26:10.318784 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.318984 kubelet[2837]: E1213 00:26:10.318794 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.319075 kubelet[2837]: E1213 00:26:10.318985 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.319075 kubelet[2837]: W1213 00:26:10.318994 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.319075 kubelet[2837]: E1213 00:26:10.319005 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.319397 kubelet[2837]: E1213 00:26:10.319180 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.319397 kubelet[2837]: W1213 00:26:10.319189 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.319397 kubelet[2837]: E1213 00:26:10.319198 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.319717 kubelet[2837]: E1213 00:26:10.319414 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.319717 kubelet[2837]: W1213 00:26:10.319423 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.319717 kubelet[2837]: E1213 00:26:10.319433 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.319717 kubelet[2837]: E1213 00:26:10.319620 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.319717 kubelet[2837]: W1213 00:26:10.319629 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.319717 kubelet[2837]: E1213 00:26:10.319640 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.320166 kubelet[2837]: E1213 00:26:10.319827 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.320166 kubelet[2837]: W1213 00:26:10.319836 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.320166 kubelet[2837]: E1213 00:26:10.319846 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.320166 kubelet[2837]: E1213 00:26:10.320022 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.320166 kubelet[2837]: W1213 00:26:10.320032 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.320166 kubelet[2837]: E1213 00:26:10.320043 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.320449 kubelet[2837]: E1213 00:26:10.320219 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.320449 kubelet[2837]: W1213 00:26:10.320241 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.320449 kubelet[2837]: E1213 00:26:10.320252 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.320449 kubelet[2837]: E1213 00:26:10.320448 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.320608 kubelet[2837]: W1213 00:26:10.320457 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.320608 kubelet[2837]: E1213 00:26:10.320467 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.320719 kubelet[2837]: E1213 00:26:10.320668 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.320719 kubelet[2837]: W1213 00:26:10.320677 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.320719 kubelet[2837]: E1213 00:26:10.320687 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.320878 kubelet[2837]: E1213 00:26:10.320871 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.320940 kubelet[2837]: W1213 00:26:10.320882 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.320940 kubelet[2837]: E1213 00:26:10.320893 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.321111 kubelet[2837]: E1213 00:26:10.321097 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.321111 kubelet[2837]: W1213 00:26:10.321108 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.321189 kubelet[2837]: E1213 00:26:10.321118 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.382862 kubelet[2837]: E1213 00:26:10.382821 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.382862 kubelet[2837]: W1213 00:26:10.382847 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.382862 kubelet[2837]: E1213 00:26:10.382868 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.383162 kubelet[2837]: E1213 00:26:10.383143 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.383162 kubelet[2837]: W1213 00:26:10.383157 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.383216 kubelet[2837]: E1213 00:26:10.383168 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.383520 kubelet[2837]: E1213 00:26:10.383502 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.383520 kubelet[2837]: W1213 00:26:10.383514 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.383609 kubelet[2837]: E1213 00:26:10.383534 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.383832 kubelet[2837]: E1213 00:26:10.383810 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.383832 kubelet[2837]: W1213 00:26:10.383823 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.383832 kubelet[2837]: E1213 00:26:10.383834 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.384080 kubelet[2837]: E1213 00:26:10.384062 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.384080 kubelet[2837]: W1213 00:26:10.384075 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.384148 kubelet[2837]: E1213 00:26:10.384085 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.384320 kubelet[2837]: E1213 00:26:10.384303 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.384320 kubelet[2837]: W1213 00:26:10.384316 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.384397 kubelet[2837]: E1213 00:26:10.384326 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.384654 kubelet[2837]: E1213 00:26:10.384625 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.384654 kubelet[2837]: W1213 00:26:10.384640 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.384654 kubelet[2837]: E1213 00:26:10.384651 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.384894 kubelet[2837]: E1213 00:26:10.384868 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.384894 kubelet[2837]: W1213 00:26:10.384882 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.384894 kubelet[2837]: E1213 00:26:10.384892 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.385146 kubelet[2837]: E1213 00:26:10.385127 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.385146 kubelet[2837]: W1213 00:26:10.385141 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.385219 kubelet[2837]: E1213 00:26:10.385152 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.385431 kubelet[2837]: E1213 00:26:10.385403 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.385431 kubelet[2837]: W1213 00:26:10.385419 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.385431 kubelet[2837]: E1213 00:26:10.385429 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.385644 kubelet[2837]: E1213 00:26:10.385628 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.385644 kubelet[2837]: W1213 00:26:10.385640 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.385710 kubelet[2837]: E1213 00:26:10.385650 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.385923 kubelet[2837]: E1213 00:26:10.385884 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.385923 kubelet[2837]: W1213 00:26:10.385900 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.385923 kubelet[2837]: E1213 00:26:10.385909 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.386292 kubelet[2837]: E1213 00:26:10.386261 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.386292 kubelet[2837]: W1213 00:26:10.386287 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.386452 kubelet[2837]: E1213 00:26:10.386307 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.386522 kubelet[2837]: E1213 00:26:10.386506 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.386522 kubelet[2837]: W1213 00:26:10.386516 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.386522 kubelet[2837]: E1213 00:26:10.386524 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.386729 kubelet[2837]: E1213 00:26:10.386713 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.386729 kubelet[2837]: W1213 00:26:10.386724 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.386791 kubelet[2837]: E1213 00:26:10.386731 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.386931 kubelet[2837]: E1213 00:26:10.386914 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.386931 kubelet[2837]: W1213 00:26:10.386926 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.386982 kubelet[2837]: E1213 00:26:10.386933 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:26:10.387179 kubelet[2837]: E1213 00:26:10.387163 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.387179 kubelet[2837]: W1213 00:26:10.387173 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.387262 kubelet[2837]: E1213 00:26:10.387183 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.387546 kubelet[2837]: E1213 00:26:10.387519 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:26:10.387546 kubelet[2837]: W1213 00:26:10.387530 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:26:10.387546 kubelet[2837]: E1213 00:26:10.387538 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:26:10.461339 kubelet[2837]: E1213 00:26:10.461274 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:10.806444 containerd[1658]: time="2025-12-13T00:26:10.806392997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:10.806979 containerd[1658]: time="2025-12-13T00:26:10.806949222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:10.808164 containerd[1658]: time="2025-12-13T00:26:10.808113139Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:10.810921 containerd[1658]: time="2025-12-13T00:26:10.810860249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:10.811321 containerd[1658]: time="2025-12-13T00:26:10.811281560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.520960719s" Dec 13 00:26:10.811321 containerd[1658]: time="2025-12-13T00:26:10.811308732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 13 00:26:10.817419 containerd[1658]: 
time="2025-12-13T00:26:10.817377791Z" level=info msg="CreateContainer within sandbox \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 00:26:10.826285 containerd[1658]: time="2025-12-13T00:26:10.826219942Z" level=info msg="Container 996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:10.834589 containerd[1658]: time="2025-12-13T00:26:10.834532286Z" level=info msg="CreateContainer within sandbox \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58\"" Dec 13 00:26:10.835083 containerd[1658]: time="2025-12-13T00:26:10.835048315Z" level=info msg="StartContainer for \"996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58\"" Dec 13 00:26:10.836878 containerd[1658]: time="2025-12-13T00:26:10.836849288Z" level=info msg="connecting to shim 996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58" address="unix:///run/containerd/s/e03de88eeafe4266c01c428ece5ad8fc0080aba9f8e99c77ead0f584bd120a27" protocol=ttrpc version=3 Dec 13 00:26:10.858473 systemd[1]: Started cri-containerd-996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58.scope - libcontainer container 996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58. Dec 13 00:26:10.919000 audit: BPF prog-id=170 op=LOAD Dec 13 00:26:10.919000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939366630656235663933376339366566653138306339613832393566 Dec 13 00:26:10.919000 audit: BPF prog-id=171 op=LOAD Dec 13 00:26:10.919000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939366630656235663933376339366566653138306339613832393566 Dec 13 00:26:10.919000 audit: BPF prog-id=171 op=UNLOAD Dec 13 00:26:10.919000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939366630656235663933376339366566653138306339613832393566 Dec 13 00:26:10.919000 
audit: BPF prog-id=170 op=UNLOAD Dec 13 00:26:10.919000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939366630656235663933376339366566653138306339613832393566 Dec 13 00:26:10.919000 audit: BPF prog-id=172 op=LOAD Dec 13 00:26:10.919000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3392 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:10.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939366630656235663933376339366566653138306339613832393566 Dec 13 00:26:10.953625 containerd[1658]: time="2025-12-13T00:26:10.953313320Z" level=info msg="StartContainer for \"996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58\" returns successfully" Dec 13 00:26:10.958697 systemd[1]: cri-containerd-996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58.scope: Deactivated successfully. Dec 13 00:26:10.961990 containerd[1658]: time="2025-12-13T00:26:10.961945676Z" level=info msg="received container exit event container_id:\"996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58\" id:\"996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58\" pid:3557 exited_at:{seconds:1765585570 nanos:961522281}" Dec 13 00:26:10.961000 audit: BPF prog-id=172 op=UNLOAD Dec 13 00:26:10.985949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996f0eb5f937c96efe180c9a8295f0e0a5611685ba082127b1d463d575198d58-rootfs.mount: Deactivated successfully. 
Dec 13 00:26:11.294132 kubelet[2837]: I1213 00:26:11.294074 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:26:11.294895 kubelet[2837]: E1213 00:26:11.294504 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:11.295269 kubelet[2837]: E1213 00:26:11.295154 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:11.364024 kubelet[2837]: I1213 00:26:11.363852 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-598bcd6df-rdsqd" podStartSLOduration=4.076532999 podStartE2EDuration="6.363829988s" podCreationTimestamp="2025-12-13 00:26:05 +0000 UTC" firstStartedPulling="2025-12-13 00:26:07.002876927 +0000 UTC m=+26.630413134" lastFinishedPulling="2025-12-13 00:26:09.290173916 +0000 UTC m=+28.917710123" observedRunningTime="2025-12-13 00:26:10.307818276 +0000 UTC m=+29.935354483" watchObservedRunningTime="2025-12-13 00:26:11.363829988 +0000 UTC m=+30.991366195" Dec 13 00:26:12.298513 kubelet[2837]: E1213 00:26:12.298469 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:12.299606 containerd[1658]: time="2025-12-13T00:26:12.299531208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 13 00:26:12.462045 kubelet[2837]: E1213 00:26:12.461448 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:14.461204 kubelet[2837]: E1213 00:26:14.461152 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:15.990840 containerd[1658]: time="2025-12-13T00:26:15.990779512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:15.991739 containerd[1658]: time="2025-12-13T00:26:15.991710149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 13 00:26:15.993002 containerd[1658]: time="2025-12-13T00:26:15.992943535Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:15.995163 containerd[1658]: time="2025-12-13T00:26:15.995121434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:15.995692 containerd[1658]: time="2025-12-13T00:26:15.995644928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.696067463s" Dec 13 00:26:15.995692 containerd[1658]: time="2025-12-13T00:26:15.995674553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 13 00:26:16.000605 containerd[1658]: time="2025-12-13T00:26:16.000577319Z" level=info msg="CreateContainer within sandbox \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 00:26:16.010702 containerd[1658]: time="2025-12-13T00:26:16.010647034Z" level=info msg="Container 5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:16.019586 containerd[1658]: time="2025-12-13T00:26:16.019531043Z" level=info msg="CreateContainer within sandbox \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d\"" Dec 13 00:26:16.020420 containerd[1658]: time="2025-12-13T00:26:16.020359889Z" level=info msg="StartContainer for \"5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d\"" Dec 13 00:26:16.021779 containerd[1658]: time="2025-12-13T00:26:16.021731654Z" level=info msg="connecting to shim 5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d" address="unix:///run/containerd/s/e03de88eeafe4266c01c428ece5ad8fc0080aba9f8e99c77ead0f584bd120a27" protocol=ttrpc version=3 Dec 13 00:26:16.046417 systemd[1]: Started cri-containerd-5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d.scope - libcontainer container 5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d. 
Dec 13 00:26:16.105000 audit: BPF prog-id=173 op=LOAD Dec 13 00:26:16.107734 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 13 00:26:16.107803 kernel: audit: type=1334 audit(1765585576.105:569): prog-id=173 op=LOAD Dec 13 00:26:16.105000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.114058 kernel: audit: type=1300 audit(1765585576.105:569): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.114104 kernel: audit: type=1327 audit(1765585576.105:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.105000 audit: BPF prog-id=174 op=LOAD Dec 13 00:26:16.120274 kernel: audit: type=1334 audit(1765585576.105:570): prog-id=174 op=LOAD Dec 13 00:26:16.120319 kernel: audit: type=1300 audit(1765585576.105:570): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.105000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.130163 kernel: audit: type=1327 audit(1765585576.105:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.130290 kernel: audit: type=1334 audit(1765585576.105:571): prog-id=174 op=UNLOAD Dec 13 00:26:16.105000 audit: BPF prog-id=174 op=UNLOAD Dec 13 00:26:16.105000 audit[3604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.136322 kernel: audit: type=1300 
audit(1765585576.105:571): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.137293 kernel: audit: type=1327 audit(1765585576.105:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.105000 audit: BPF prog-id=173 op=UNLOAD Dec 13 00:26:16.143295 kernel: audit: type=1334 audit(1765585576.105:572): prog-id=173 op=UNLOAD Dec 13 00:26:16.105000 audit[3604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.105000 audit: BPF prog-id=175 op=LOAD Dec 13 00:26:16.105000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3392 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:16.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562656663623161386661346434653731663733303736353437613339 Dec 13 00:26:16.147447 containerd[1658]: time="2025-12-13T00:26:16.147402887Z" level=info msg="StartContainer for \"5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d\" returns successfully" Dec 13 00:26:16.311361 kubelet[2837]: E1213 00:26:16.311171 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:16.460802 kubelet[2837]: E1213 00:26:16.460716 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:17.312305 kubelet[2837]: E1213 00:26:17.312267 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:17.806248 systemd[1]: 
cri-containerd-5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d.scope: Deactivated successfully. Dec 13 00:26:17.806986 systemd[1]: cri-containerd-5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d.scope: Consumed 666ms CPU time, 177.3M memory peak, 4M read from disk, 171.3M written to disk. Dec 13 00:26:17.809596 containerd[1658]: time="2025-12-13T00:26:17.809551993Z" level=info msg="received container exit event container_id:\"5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d\" id:\"5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d\" pid:3616 exited_at:{seconds:1765585577 nanos:809339033}" Dec 13 00:26:17.815000 audit: BPF prog-id=175 op=UNLOAD Dec 13 00:26:17.835728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5befcb1a8fa4d4e71f73076547a39f312859d32ab5128f3122c91c1d2e33cd7d-rootfs.mount: Deactivated successfully. Dec 13 00:26:17.905974 kubelet[2837]: I1213 00:26:17.905921 2837 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 13 00:26:18.562285 systemd[1]: Created slice kubepods-burstable-pod87f70cee_6c2d_4ba1_82db_48c2cd8f9534.slice - libcontainer container kubepods-burstable-pod87f70cee_6c2d_4ba1_82db_48c2cd8f9534.slice. Dec 13 00:26:18.640319 kubelet[2837]: I1213 00:26:18.640218 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f70cee-6c2d-4ba1-82db-48c2cd8f9534-config-volume\") pod \"coredns-66bc5c9577-2j2rk\" (UID: \"87f70cee-6c2d-4ba1-82db-48c2cd8f9534\") " pod="kube-system/coredns-66bc5c9577-2j2rk" Dec 13 00:26:18.640319 kubelet[2837]: I1213 00:26:18.640318 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkpv\" (UniqueName: \"kubernetes.io/projected/87f70cee-6c2d-4ba1-82db-48c2cd8f9534-kube-api-access-hnkpv\") pod \"coredns-66bc5c9577-2j2rk\" (UID: \"87f70cee-6c2d-4ba1-82db-48c2cd8f9534\") " pod="kube-system/coredns-66bc5c9577-2j2rk" Dec 13 00:26:18.649995 kubelet[2837]: E1213 00:26:18.649968 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:18.656527 systemd[1]: Created slice kubepods-besteffort-pod7c357da7_f81d_4093_8d71_96d21eb95cdd.slice - libcontainer container kubepods-besteffort-pod7c357da7_f81d_4093_8d71_96d21eb95cdd.slice. Dec 13 00:26:18.804911 systemd[1]: Created slice kubepods-besteffort-podebf778c3_930a_43a3_9210_8534e588628e.slice - libcontainer container kubepods-besteffort-podebf778c3_930a_43a3_9210_8534e588628e.slice. 
Dec 13 00:26:18.943697 kubelet[2837]: I1213 00:26:18.943557 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctxbb\" (UniqueName: \"kubernetes.io/projected/ebf778c3-930a-43a3-9210-8534e588628e-kube-api-access-ctxbb\") pod \"calico-kube-controllers-6dc9f4c9d-8t67s\" (UID: \"ebf778c3-930a-43a3-9210-8534e588628e\") " pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" Dec 13 00:26:18.943697 kubelet[2837]: I1213 00:26:18.943609 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf778c3-930a-43a3-9210-8534e588628e-tigera-ca-bundle\") pod \"calico-kube-controllers-6dc9f4c9d-8t67s\" (UID: \"ebf778c3-930a-43a3-9210-8534e588628e\") " pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" Dec 13 00:26:18.967753 containerd[1658]: time="2025-12-13T00:26:18.967687728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pkv7,Uid:7c357da7-f81d-4093-8d71-96d21eb95cdd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:19.144580 systemd[1]: Created slice kubepods-besteffort-pod06395f50_a88f_48a6_b5f1_47617410b0b2.slice - libcontainer container kubepods-besteffort-pod06395f50_a88f_48a6_b5f1_47617410b0b2.slice. Dec 13 00:26:19.221702 systemd[1]: Created slice kubepods-besteffort-podf7f994f0_f034_4b20_81af_4664a13b71bc.slice - libcontainer container kubepods-besteffort-podf7f994f0_f034_4b20_81af_4664a13b71bc.slice. Dec 13 00:26:19.233659 systemd[1]: Created slice kubepods-burstable-podb920e14f_b711_410a_9f57_a0f3b9193e31.slice - libcontainer container kubepods-burstable-podb920e14f_b711_410a_9f57_a0f3b9193e31.slice. Dec 13 00:26:19.244000 audit[3662]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:19.244000 audit[3662]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc7cf683d0 a2=0 a3=7ffc7cf683bc items=0 ppid=2990 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:19.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:19.247441 kubelet[2837]: I1213 00:26:19.246253 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06395f50-a88f-48a6-b5f1-47617410b0b2-calico-apiserver-certs\") pod \"calico-apiserver-8d68b8c7b-pl252\" (UID: \"06395f50-a88f-48a6-b5f1-47617410b0b2\") " pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" Dec 13 00:26:19.248371 kubelet[2837]: I1213 00:26:19.246467 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgl7\" (UniqueName: \"kubernetes.io/projected/06395f50-a88f-48a6-b5f1-47617410b0b2-kube-api-access-phgl7\") pod \"calico-apiserver-8d68b8c7b-pl252\" (UID: \"06395f50-a88f-48a6-b5f1-47617410b0b2\") " pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" Dec 13 00:26:19.251000 audit[3662]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=3662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:19.251000 audit[3662]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 
a1=7ffc7cf683d0 a2=0 a3=7ffc7cf683bc items=0 ppid=2990 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:19.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:19.257091 systemd[1]: Created slice kubepods-besteffort-pod7f743830_c848_46de_a996_0043d7234fb9.slice - libcontainer container kubepods-besteffort-pod7f743830_c848_46de_a996_0043d7234fb9.slice. Dec 13 00:26:19.265852 systemd[1]: Created slice kubepods-besteffort-pod221701a5_b818_49d6_9c29_c4e060d651fd.slice - libcontainer container kubepods-besteffort-pod221701a5_b818_49d6_9c29_c4e060d651fd.slice. Dec 13 00:26:19.273759 systemd[1]: Created slice kubepods-besteffort-pod4deb6945_66eb_45de_ac81_4441491473f3.slice - libcontainer container kubepods-besteffort-pod4deb6945_66eb_45de_ac81_4441491473f3.slice. Dec 13 00:26:19.318795 containerd[1658]: time="2025-12-13T00:26:19.318726744Z" level=error msg="Failed to destroy network for sandbox \"1b04d92f4a161f76ebdad3464ccfb82ec57504a47f5b099293fd9d667d2e048e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.320898 systemd[1]: run-netns-cni\x2d1630660f\x2d44e5\x2dd156\x2d5fcf\x2d867d552e5638.mount: Deactivated successfully. Dec 13 00:26:19.324446 kubelet[2837]: E1213 00:26:19.324403 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:19.324895 containerd[1658]: time="2025-12-13T00:26:19.324842504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pkv7,Uid:7c357da7-f81d-4093-8d71-96d21eb95cdd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b04d92f4a161f76ebdad3464ccfb82ec57504a47f5b099293fd9d667d2e048e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.326439 kubelet[2837]: E1213 00:26:19.326366 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b04d92f4a161f76ebdad3464ccfb82ec57504a47f5b099293fd9d667d2e048e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.326708 kubelet[2837]: E1213 00:26:19.326607 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b04d92f4a161f76ebdad3464ccfb82ec57504a47f5b099293fd9d667d2e048e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:19.326708 kubelet[2837]: E1213 00:26:19.326637 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1b04d92f4a161f76ebdad3464ccfb82ec57504a47f5b099293fd9d667d2e048e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:19.326912 kubelet[2837]: E1213 00:26:19.326706 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b04d92f4a161f76ebdad3464ccfb82ec57504a47f5b099293fd9d667d2e048e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:19.328530 containerd[1658]: time="2025-12-13T00:26:19.328492044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 13 00:26:19.349210 kubelet[2837]: I1213 00:26:19.349150 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rtw\" (UniqueName: \"kubernetes.io/projected/f7f994f0-f034-4b20-81af-4664a13b71bc-kube-api-access-92rtw\") pod \"calico-apiserver-8d68b8c7b-klw9n\" (UID: \"f7f994f0-f034-4b20-81af-4664a13b71bc\") " pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" Dec 13 00:26:19.349210 kubelet[2837]: I1213 00:26:19.349207 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4deb6945-66eb-45de-ac81-4441491473f3-calico-apiserver-certs\") pod \"calico-apiserver-6f7d758f46-d75hr\" (UID: \"4deb6945-66eb-45de-ac81-4441491473f3\") " pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" Dec 13 00:26:19.349480 kubelet[2837]: I1213 00:26:19.349308 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b920e14f-b711-410a-9f57-a0f3b9193e31-config-volume\") pod \"coredns-66bc5c9577-9z4hg\" (UID: \"b920e14f-b711-410a-9f57-a0f3b9193e31\") " pod="kube-system/coredns-66bc5c9577-9z4hg" Dec 13 00:26:19.349480 kubelet[2837]: I1213 00:26:19.349335 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f7f994f0-f034-4b20-81af-4664a13b71bc-calico-apiserver-certs\") pod \"calico-apiserver-8d68b8c7b-klw9n\" (UID: \"f7f994f0-f034-4b20-81af-4664a13b71bc\") " pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" Dec 13 00:26:19.349480 kubelet[2837]: I1213 00:26:19.349396 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/221701a5-b818-49d6-9c29-c4e060d651fd-goldmane-key-pair\") pod \"goldmane-7c778bb748-6qgdw\" (UID: \"221701a5-b818-49d6-9c29-c4e060d651fd\") " pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:19.349589 kubelet[2837]: I1213 00:26:19.349484 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/221701a5-b818-49d6-9c29-c4e060d651fd-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-6qgdw\" (UID: \"221701a5-b818-49d6-9c29-c4e060d651fd\") " pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:19.350059 kubelet[2837]: I1213 00:26:19.349993 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfxb\" (UniqueName: \"kubernetes.io/projected/4deb6945-66eb-45de-ac81-4441491473f3-kube-api-access-rnfxb\") pod \"calico-apiserver-6f7d758f46-d75hr\" (UID: \"4deb6945-66eb-45de-ac81-4441491473f3\") " pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" Dec 13 00:26:19.350059 kubelet[2837]: I1213 00:26:19.350034 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnr8r\" (UniqueName: \"kubernetes.io/projected/7f743830-c848-46de-a996-0043d7234fb9-kube-api-access-mnr8r\") pod \"whisker-6b5444d6c5-k76g8\" (UID: \"7f743830-c848-46de-a996-0043d7234fb9\") " pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:19.350059 kubelet[2837]: I1213 00:26:19.350055 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd97j\" (UniqueName: \"kubernetes.io/projected/b920e14f-b711-410a-9f57-a0f3b9193e31-kube-api-access-hd97j\") pod \"coredns-66bc5c9577-9z4hg\" (UID: \"b920e14f-b711-410a-9f57-a0f3b9193e31\") " pod="kube-system/coredns-66bc5c9577-9z4hg" Dec 13 00:26:19.350204 kubelet[2837]: I1213 00:26:19.350077 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnj9\" (UniqueName: \"kubernetes.io/projected/221701a5-b818-49d6-9c29-c4e060d651fd-kube-api-access-bxnj9\") pod \"goldmane-7c778bb748-6qgdw\" (UID: \"221701a5-b818-49d6-9c29-c4e060d651fd\") " pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:19.350204 kubelet[2837]: I1213 00:26:19.350098 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f743830-c848-46de-a996-0043d7234fb9-whisker-backend-key-pair\") pod \"whisker-6b5444d6c5-k76g8\" (UID: \"7f743830-c848-46de-a996-0043d7234fb9\") " pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:19.350204 kubelet[2837]: I1213 00:26:19.350139 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f743830-c848-46de-a996-0043d7234fb9-whisker-ca-bundle\") pod \"whisker-6b5444d6c5-k76g8\" (UID: \"7f743830-c848-46de-a996-0043d7234fb9\") " pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:19.350204 kubelet[2837]: I1213 00:26:19.350162 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/221701a5-b818-49d6-9c29-c4e060d651fd-config\") pod \"goldmane-7c778bb748-6qgdw\" (UID: \"221701a5-b818-49d6-9c29-c4e060d651fd\") " pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:19.410484 containerd[1658]: time="2025-12-13T00:26:19.410437253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9f4c9d-8t67s,Uid:ebf778c3-930a-43a3-9210-8534e588628e,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:19.450275 containerd[1658]: time="2025-12-13T00:26:19.450069716Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-pl252,Uid:06395f50-a88f-48a6-b5f1-47617410b0b2,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:19.469829 kubelet[2837]: E1213 00:26:19.468304 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:19.469949 containerd[1658]: time="2025-12-13T00:26:19.469403348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2j2rk,Uid:87f70cee-6c2d-4ba1-82db-48c2cd8f9534,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:19.476552 containerd[1658]: time="2025-12-13T00:26:19.476422062Z" level=error msg="Failed to destroy network for sandbox \"7d8debf32c7d3b11150813c03b1984b875065efac4afa699f79c4e2b04efb4ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.484144 containerd[1658]: time="2025-12-13T00:26:19.484065900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9f4c9d-8t67s,Uid:ebf778c3-930a-43a3-9210-8534e588628e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d8debf32c7d3b11150813c03b1984b875065efac4afa699f79c4e2b04efb4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.484454 kubelet[2837]: E1213 00:26:19.484397 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d8debf32c7d3b11150813c03b1984b875065efac4afa699f79c4e2b04efb4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.484454 kubelet[2837]: E1213 00:26:19.484459 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d8debf32c7d3b11150813c03b1984b875065efac4afa699f79c4e2b04efb4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" Dec 13 00:26:19.484615 kubelet[2837]: E1213 00:26:19.484487 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d8debf32c7d3b11150813c03b1984b875065efac4afa699f79c4e2b04efb4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" Dec 13 00:26:19.484615 kubelet[2837]: E1213 00:26:19.484554 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7d8debf32c7d3b11150813c03b1984b875065efac4afa699f79c4e2b04efb4ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:26:19.527018 containerd[1658]: time="2025-12-13T00:26:19.526966790Z" level=error msg="Failed to destroy network for sandbox \"92da968c78c822e1aa6c67b765fd0db2f624a85f9d8dc726cd3e82f23f17a9d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.529588 containerd[1658]: time="2025-12-13T00:26:19.529529991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-pl252,Uid:06395f50-a88f-48a6-b5f1-47617410b0b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da968c78c822e1aa6c67b765fd0db2f624a85f9d8dc726cd3e82f23f17a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.529855 kubelet[2837]: E1213 00:26:19.529819 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da968c78c822e1aa6c67b765fd0db2f624a85f9d8dc726cd3e82f23f17a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.529914 kubelet[2837]: E1213 00:26:19.529871 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da968c78c822e1aa6c67b765fd0db2f624a85f9d8dc726cd3e82f23f17a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" Dec 13 00:26:19.529914 kubelet[2837]: E1213 00:26:19.529889 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da968c78c822e1aa6c67b765fd0db2f624a85f9d8dc726cd3e82f23f17a9d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" Dec 13 00:26:19.530021 kubelet[2837]: E1213 00:26:19.529942 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92da968c78c822e1aa6c67b765fd0db2f624a85f9d8dc726cd3e82f23f17a9d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:26:19.531876 containerd[1658]: time="2025-12-13T00:26:19.531807656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-klw9n,Uid:f7f994f0-f034-4b20-81af-4664a13b71bc,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:19.540240 containerd[1658]: time="2025-12-13T00:26:19.540177658Z" level=error msg="Failed to destroy network for sandbox \"57faeff8fc34eccb40cf17d7a560c325f5aa145524903d20eab05e3dd23d74d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.542852 containerd[1658]: time="2025-12-13T00:26:19.542819115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2j2rk,Uid:87f70cee-6c2d-4ba1-82db-48c2cd8f9534,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57faeff8fc34eccb40cf17d7a560c325f5aa145524903d20eab05e3dd23d74d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.543078 kubelet[2837]: E1213 00:26:19.543004 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57faeff8fc34eccb40cf17d7a560c325f5aa145524903d20eab05e3dd23d74d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.543185 kubelet[2837]: E1213 00:26:19.543092 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57faeff8fc34eccb40cf17d7a560c325f5aa145524903d20eab05e3dd23d74d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2j2rk" Dec 13 00:26:19.543185 kubelet[2837]: E1213 00:26:19.543108 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57faeff8fc34eccb40cf17d7a560c325f5aa145524903d20eab05e3dd23d74d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2j2rk" Dec 13 00:26:19.543332 kubelet[2837]: E1213 00:26:19.543177 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2j2rk_kube-system(87f70cee-6c2d-4ba1-82db-48c2cd8f9534)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2j2rk_kube-system(87f70cee-6c2d-4ba1-82db-48c2cd8f9534)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57faeff8fc34eccb40cf17d7a560c325f5aa145524903d20eab05e3dd23d74d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2j2rk" podUID="87f70cee-6c2d-4ba1-82db-48c2cd8f9534" Dec 13 
00:26:19.553263 kubelet[2837]: E1213 00:26:19.553059 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:19.553911 containerd[1658]: time="2025-12-13T00:26:19.553881772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9z4hg,Uid:b920e14f-b711-410a-9f57-a0f3b9193e31,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:19.565290 containerd[1658]: time="2025-12-13T00:26:19.564892560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5444d6c5-k76g8,Uid:7f743830-c848-46de-a996-0043d7234fb9,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:19.574601 containerd[1658]: time="2025-12-13T00:26:19.574546542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgdw,Uid:221701a5-b818-49d6-9c29-c4e060d651fd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:19.583969 containerd[1658]: time="2025-12-13T00:26:19.583921819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d758f46-d75hr,Uid:4deb6945-66eb-45de-ac81-4441491473f3,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:19.609498 containerd[1658]: time="2025-12-13T00:26:19.609430481Z" level=error msg="Failed to destroy network for sandbox \"3d41e98c43507d1a61b13d4193b09b2968be79703d60c530ba9ca691871499bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.614825 containerd[1658]: time="2025-12-13T00:26:19.614759614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-klw9n,Uid:f7f994f0-f034-4b20-81af-4664a13b71bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d41e98c43507d1a61b13d4193b09b2968be79703d60c530ba9ca691871499bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.615152 kubelet[2837]: E1213 00:26:19.615088 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d41e98c43507d1a61b13d4193b09b2968be79703d60c530ba9ca691871499bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.615251 kubelet[2837]: E1213 00:26:19.615166 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d41e98c43507d1a61b13d4193b09b2968be79703d60c530ba9ca691871499bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" Dec 13 00:26:19.615251 kubelet[2837]: E1213 00:26:19.615187 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d41e98c43507d1a61b13d4193b09b2968be79703d60c530ba9ca691871499bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" Dec 13 00:26:19.615520 kubelet[2837]: E1213 00:26:19.615485 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d41e98c43507d1a61b13d4193b09b2968be79703d60c530ba9ca691871499bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:26:19.652715 containerd[1658]: time="2025-12-13T00:26:19.652664075Z" level=error msg="Failed to destroy network for sandbox \"cea17628e292da3e4c0e6f5bb327d2d4a70120369c92f4877f8cba76835d7505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.668488 containerd[1658]: time="2025-12-13T00:26:19.668417967Z" level=error msg="Failed to destroy network for sandbox \"68f986a781bfec089a9579e2f93d1d40cbf425aecc9f7efe479578415c17d72c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.669155 containerd[1658]: time="2025-12-13T00:26:19.669015649Z" level=error msg="Failed to destroy network for sandbox \"b62f9779631d9e4102d023b80aa73ac9c86031f135ad9e93af3ccfd22d8e5576\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.672704 containerd[1658]: time="2025-12-13T00:26:19.672656162Z" level=error msg="Failed to destroy network for sandbox \"b18d7e8696147ea7b2df795999e6b6fd3e78874fce9d8eed23d98268bedef953\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.682609 containerd[1658]: time="2025-12-13T00:26:19.682488368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9z4hg,Uid:b920e14f-b711-410a-9f57-a0f3b9193e31,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea17628e292da3e4c0e6f5bb327d2d4a70120369c92f4877f8cba76835d7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.682902 kubelet[2837]: E1213 00:26:19.682860 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea17628e292da3e4c0e6f5bb327d2d4a70120369c92f4877f8cba76835d7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.683362 kubelet[2837]: E1213 
00:26:19.682931 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea17628e292da3e4c0e6f5bb327d2d4a70120369c92f4877f8cba76835d7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9z4hg" Dec 13 00:26:19.683362 kubelet[2837]: E1213 00:26:19.682952 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cea17628e292da3e4c0e6f5bb327d2d4a70120369c92f4877f8cba76835d7505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9z4hg" Dec 13 00:26:19.683362 kubelet[2837]: E1213 00:26:19.683017 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9z4hg_kube-system(b920e14f-b711-410a-9f57-a0f3b9193e31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9z4hg_kube-system(b920e14f-b711-410a-9f57-a0f3b9193e31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cea17628e292da3e4c0e6f5bb327d2d4a70120369c92f4877f8cba76835d7505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9z4hg" podUID="b920e14f-b711-410a-9f57-a0f3b9193e31" Dec 13 00:26:19.752254 containerd[1658]: time="2025-12-13T00:26:19.752170086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgdw,Uid:221701a5-b818-49d6-9c29-c4e060d651fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f986a781bfec089a9579e2f93d1d40cbf425aecc9f7efe479578415c17d72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.752549 kubelet[2837]: E1213 00:26:19.752499 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f986a781bfec089a9579e2f93d1d40cbf425aecc9f7efe479578415c17d72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.752631 kubelet[2837]: E1213 00:26:19.752569 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f986a781bfec089a9579e2f93d1d40cbf425aecc9f7efe479578415c17d72c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:19.752631 kubelet[2837]: E1213 00:26:19.752594 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f986a781bfec089a9579e2f93d1d40cbf425aecc9f7efe479578415c17d72c\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:19.752733 kubelet[2837]: E1213 00:26:19.752687 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68f986a781bfec089a9579e2f93d1d40cbf425aecc9f7efe479578415c17d72c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:26:19.761954 containerd[1658]: time="2025-12-13T00:26:19.761888449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d758f46-d75hr,Uid:4deb6945-66eb-45de-ac81-4441491473f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62f9779631d9e4102d023b80aa73ac9c86031f135ad9e93af3ccfd22d8e5576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.762282 kubelet[2837]: E1213 00:26:19.762201 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62f9779631d9e4102d023b80aa73ac9c86031f135ad9e93af3ccfd22d8e5576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.762357 kubelet[2837]: E1213 00:26:19.762313 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62f9779631d9e4102d023b80aa73ac9c86031f135ad9e93af3ccfd22d8e5576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" Dec 13 00:26:19.762357 kubelet[2837]: E1213 00:26:19.762334 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62f9779631d9e4102d023b80aa73ac9c86031f135ad9e93af3ccfd22d8e5576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" Dec 13 00:26:19.762448 kubelet[2837]: E1213 00:26:19.762389 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b62f9779631d9e4102d023b80aa73ac9c86031f135ad9e93af3ccfd22d8e5576\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:26:19.763138 containerd[1658]: time="2025-12-13T00:26:19.763070778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5444d6c5-k76g8,Uid:7f743830-c848-46de-a996-0043d7234fb9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18d7e8696147ea7b2df795999e6b6fd3e78874fce9d8eed23d98268bedef953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.763368 kubelet[2837]: E1213 00:26:19.763338 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18d7e8696147ea7b2df795999e6b6fd3e78874fce9d8eed23d98268bedef953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:19.763442 kubelet[2837]: E1213 00:26:19.763377 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18d7e8696147ea7b2df795999e6b6fd3e78874fce9d8eed23d98268bedef953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:19.763442 kubelet[2837]: E1213 00:26:19.763396 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18d7e8696147ea7b2df795999e6b6fd3e78874fce9d8eed23d98268bedef953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:19.763510 kubelet[2837]: E1213 00:26:19.763435 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b5444d6c5-k76g8_calico-system(7f743830-c848-46de-a996-0043d7234fb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b5444d6c5-k76g8_calico-system(7f743830-c848-46de-a996-0043d7234fb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b18d7e8696147ea7b2df795999e6b6fd3e78874fce9d8eed23d98268bedef953\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b5444d6c5-k76g8" podUID="7f743830-c848-46de-a996-0043d7234fb9" Dec 13 00:26:29.216372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618418001.mount: Deactivated successfully. 
Dec 13 00:26:33.108480 containerd[1658]: time="2025-12-13T00:26:33.108364031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgdw,Uid:221701a5-b818-49d6-9c29-c4e060d651fd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:33.324955 containerd[1658]: time="2025-12-13T00:26:33.324895197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d758f46-d75hr,Uid:4deb6945-66eb-45de-ac81-4441491473f3,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:33.588858 kubelet[2837]: E1213 00:26:33.588815 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:33.589497 containerd[1658]: time="2025-12-13T00:26:33.589460437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9z4hg,Uid:b920e14f-b711-410a-9f57-a0f3b9193e31,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:33.609367 containerd[1658]: time="2025-12-13T00:26:33.609192355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-pl252,Uid:06395f50-a88f-48a6-b5f1-47617410b0b2,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:33.698259 containerd[1658]: time="2025-12-13T00:26:33.698198977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pkv7,Uid:7c357da7-f81d-4093-8d71-96d21eb95cdd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:33.710604 containerd[1658]: time="2025-12-13T00:26:33.709734078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:33.710906 containerd[1658]: time="2025-12-13T00:26:33.710871792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9f4c9d-8t67s,Uid:ebf778c3-930a-43a3-9210-8534e588628e,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:33.824705 containerd[1658]: time="2025-12-13T00:26:33.824572061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 13 00:26:33.893671 containerd[1658]: time="2025-12-13T00:26:33.893533706Z" level=error msg="Failed to destroy network for sandbox \"cec4a6d7400449e1b711c88a36c2d200a0263955e2b3b4cee32e8fc66bba9560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:33.917258 containerd[1658]: time="2025-12-13T00:26:33.917156274Z" level=error msg="Failed to destroy network for sandbox \"810bec37930544f353a8b6acdb7666c8b1b3004d3fa3c0fd1319bc439a2042b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.102017 containerd[1658]: time="2025-12-13T00:26:34.101942750Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:34.137812 containerd[1658]: time="2025-12-13T00:26:34.137741449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgdw,Uid:221701a5-b818-49d6-9c29-c4e060d651fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cec4a6d7400449e1b711c88a36c2d200a0263955e2b3b4cee32e8fc66bba9560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.138624 containerd[1658]: time="2025-12-13T00:26:34.137916146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d758f46-d75hr,Uid:4deb6945-66eb-45de-ac81-4441491473f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"810bec37930544f353a8b6acdb7666c8b1b3004d3fa3c0fd1319bc439a2042b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.139010 kubelet[2837]: E1213 00:26:34.138966 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810bec37930544f353a8b6acdb7666c8b1b3004d3fa3c0fd1319bc439a2042b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.139119 kubelet[2837]: E1213 00:26:34.139037 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810bec37930544f353a8b6acdb7666c8b1b3004d3fa3c0fd1319bc439a2042b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" Dec 13 00:26:34.139119 kubelet[2837]: E1213 00:26:34.139060 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810bec37930544f353a8b6acdb7666c8b1b3004d3fa3c0fd1319bc439a2042b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" Dec 13 00:26:34.139176 kubelet[2837]: E1213 00:26:34.139143 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"810bec37930544f353a8b6acdb7666c8b1b3004d3fa3c0fd1319bc439a2042b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:26:34.139496 kubelet[2837]: E1213 00:26:34.138844 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec4a6d7400449e1b711c88a36c2d200a0263955e2b3b4cee32e8fc66bba9560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 00:26:34.139568 kubelet[2837]: E1213 00:26:34.139515 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec4a6d7400449e1b711c88a36c2d200a0263955e2b3b4cee32e8fc66bba9560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:34.139620 kubelet[2837]: E1213 00:26:34.139566 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cec4a6d7400449e1b711c88a36c2d200a0263955e2b3b4cee32e8fc66bba9560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6qgdw" Dec 13 00:26:34.139620 kubelet[2837]: E1213 00:26:34.139645 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cec4a6d7400449e1b711c88a36c2d200a0263955e2b3b4cee32e8fc66bba9560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:26:34.140223 containerd[1658]: time="2025-12-13T00:26:34.140151821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:26:34.145715 containerd[1658]: time="2025-12-13T00:26:34.144155111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 14.815613784s" Dec 13 00:26:34.145715 containerd[1658]: time="2025-12-13T00:26:34.144212769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 13 00:26:34.162302 containerd[1658]: time="2025-12-13T00:26:34.162197969Z" level=info msg="CreateContainer within sandbox \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 00:26:34.177317 containerd[1658]: time="2025-12-13T00:26:34.177191307Z" level=info msg="Container b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:34.201810 containerd[1658]: time="2025-12-13T00:26:34.201437233Z" level=info msg="CreateContainer within sandbox \"25ca9d91ceccb9bfdac7e04323876cb50aa0ace0252c1489cf5cca5321e24a74\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed\"" Dec 13 00:26:34.204252 containerd[1658]: time="2025-12-13T00:26:34.202844994Z" level=info msg="StartContainer for \"b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed\"" Dec 13 00:26:34.206828 containerd[1658]: time="2025-12-13T00:26:34.206795355Z" level=info msg="connecting to shim b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed" address="unix:///run/containerd/s/e03de88eeafe4266c01c428ece5ad8fc0080aba9f8e99c77ead0f584bd120a27" protocol=ttrpc version=3 Dec 13 00:26:34.221554 containerd[1658]: time="2025-12-13T00:26:34.221504731Z" level=error msg="Failed to destroy network for sandbox \"7618b9e2c52b75ef55aa26a42c13c65e16dad0427acf79dfb8e703b30f98515f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.224907 containerd[1658]: time="2025-12-13T00:26:34.224848293Z" level=error msg="Failed to destroy network for sandbox \"364e627d9dc6a1511efb5ce9a2009fdf43e6c99d2ed37194eda6be91578afc2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.226876 containerd[1658]: time="2025-12-13T00:26:34.226658940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-pl252,Uid:06395f50-a88f-48a6-b5f1-47617410b0b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618b9e2c52b75ef55aa26a42c13c65e16dad0427acf79dfb8e703b30f98515f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.227375 kubelet[2837]: E1213 00:26:34.227335 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618b9e2c52b75ef55aa26a42c13c65e16dad0427acf79dfb8e703b30f98515f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.227590 kubelet[2837]: E1213 00:26:34.227563 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618b9e2c52b75ef55aa26a42c13c65e16dad0427acf79dfb8e703b30f98515f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" Dec 13 00:26:34.227715 kubelet[2837]: E1213 00:26:34.227693 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618b9e2c52b75ef55aa26a42c13c65e16dad0427acf79dfb8e703b30f98515f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" Dec 13 00:26:34.228340 kubelet[2837]: E1213 00:26:34.227996 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7618b9e2c52b75ef55aa26a42c13c65e16dad0427acf79dfb8e703b30f98515f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:26:34.237316 containerd[1658]: time="2025-12-13T00:26:34.237258515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pkv7,Uid:7c357da7-f81d-4093-8d71-96d21eb95cdd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"364e627d9dc6a1511efb5ce9a2009fdf43e6c99d2ed37194eda6be91578afc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.237561 kubelet[2837]: E1213 00:26:34.237464 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"364e627d9dc6a1511efb5ce9a2009fdf43e6c99d2ed37194eda6be91578afc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.237561 kubelet[2837]: E1213 00:26:34.237508 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"364e627d9dc6a1511efb5ce9a2009fdf43e6c99d2ed37194eda6be91578afc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:34.237561 kubelet[2837]: E1213 00:26:34.237524 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"364e627d9dc6a1511efb5ce9a2009fdf43e6c99d2ed37194eda6be91578afc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pkv7" Dec 13 00:26:34.237656 kubelet[2837]: E1213 00:26:34.237572 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"364e627d9dc6a1511efb5ce9a2009fdf43e6c99d2ed37194eda6be91578afc2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:34.255756 containerd[1658]: time="2025-12-13T00:26:34.255583302Z" 
level=error msg="Failed to destroy network for sandbox \"43cb4bf8f0263904fb35a36550cb5c42401ebcbe880a41b36d35e48517c4419b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.260955 containerd[1658]: time="2025-12-13T00:26:34.260898744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9f4c9d-8t67s,Uid:ebf778c3-930a-43a3-9210-8534e588628e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cb4bf8f0263904fb35a36550cb5c42401ebcbe880a41b36d35e48517c4419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.261157 kubelet[2837]: E1213 00:26:34.261090 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cb4bf8f0263904fb35a36550cb5c42401ebcbe880a41b36d35e48517c4419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.261223 kubelet[2837]: E1213 00:26:34.261151 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cb4bf8f0263904fb35a36550cb5c42401ebcbe880a41b36d35e48517c4419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" Dec 13 00:26:34.261223 kubelet[2837]: E1213 00:26:34.261179 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cb4bf8f0263904fb35a36550cb5c42401ebcbe880a41b36d35e48517c4419b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" Dec 13 00:26:34.261398 containerd[1658]: time="2025-12-13T00:26:34.261144455Z" level=error msg="Failed to destroy network for sandbox \"a4ce6453797384fa900c375fd46c3918053fdc48297bd9f32211e69e88bcf250\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.261465 kubelet[2837]: E1213 00:26:34.261260 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43cb4bf8f0263904fb35a36550cb5c42401ebcbe880a41b36d35e48517c4419b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:26:34.263638 containerd[1658]: time="2025-12-13T00:26:34.263564855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9z4hg,Uid:b920e14f-b711-410a-9f57-a0f3b9193e31,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ce6453797384fa900c375fd46c3918053fdc48297bd9f32211e69e88bcf250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.263867 kubelet[2837]: E1213 00:26:34.263827 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ce6453797384fa900c375fd46c3918053fdc48297bd9f32211e69e88bcf250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.263945 kubelet[2837]: E1213 00:26:34.263877 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ce6453797384fa900c375fd46c3918053fdc48297bd9f32211e69e88bcf250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9z4hg" Dec 13 00:26:34.263945 kubelet[2837]: E1213 00:26:34.263897 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4ce6453797384fa900c375fd46c3918053fdc48297bd9f32211e69e88bcf250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9z4hg" Dec 13 00:26:34.264000 kubelet[2837]: E1213 00:26:34.263947 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9z4hg_kube-system(b920e14f-b711-410a-9f57-a0f3b9193e31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9z4hg_kube-system(b920e14f-b711-410a-9f57-a0f3b9193e31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4ce6453797384fa900c375fd46c3918053fdc48297bd9f32211e69e88bcf250\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9z4hg" podUID="b920e14f-b711-410a-9f57-a0f3b9193e31" Dec 13 00:26:34.267572 systemd[1]: Started cri-containerd-b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed.scope - libcontainer container b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed. 
Dec 13 00:26:34.340000 audit: BPF prog-id=176 op=LOAD Dec 13 00:26:34.341927 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 13 00:26:34.342067 kernel: audit: type=1334 audit(1765585594.340:577): prog-id=176 op=LOAD Dec 13 00:26:34.340000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.350907 kernel: audit: type=1300 audit(1765585594.340:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.351121 kernel: audit: type=1327 audit(1765585594.340:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.340000 audit: BPF prog-id=177 op=LOAD Dec 13 00:26:34.359751 kernel: audit: type=1334 audit(1765585594.340:578): prog-id=177 op=LOAD Dec 13 00:26:34.359800 kernel: audit: type=1300 audit(1765585594.340:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.340000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.367284 kernel: audit: type=1327 audit(1765585594.340:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.373544 kernel: audit: type=1334 audit(1765585594.340:579): prog-id=177 op=UNLOAD Dec 13 00:26:34.340000 audit: BPF prog-id=177 op=UNLOAD Dec 13 00:26:34.374960 kernel: audit: type=1300 audit(1765585594.340:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.340000 
audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.380145 kernel: audit: type=1327 audit(1765585594.340:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.340000 audit: BPF prog-id=176 op=UNLOAD Dec 13 00:26:34.387286 kernel: audit: type=1334 audit(1765585594.340:580): prog-id=176 op=UNLOAD Dec 13 00:26:34.340000 audit[4155]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.340000 audit: BPF prog-id=178 op=LOAD Dec 13 00:26:34.340000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3392 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:34.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238656432653463623234313662663835383061343639306663386466 Dec 13 00:26:34.395045 containerd[1658]: time="2025-12-13T00:26:34.394965074Z" level=info msg="StartContainer for \"b8ed2e4cb2416bf8580a4690fc8df5841a8a0e8b560cf98c528412812ddb90ed\" returns successfully" Dec 13 00:26:34.469801 containerd[1658]: time="2025-12-13T00:26:34.468688421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-klw9n,Uid:f7f994f0-f034-4b20-81af-4664a13b71bc,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:34.471053 kubelet[2837]: E1213 00:26:34.471023 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:34.474499 containerd[1658]: time="2025-12-13T00:26:34.474456863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2j2rk,Uid:87f70cee-6c2d-4ba1-82db-48c2cd8f9534,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:34.476319 containerd[1658]: time="2025-12-13T00:26:34.475808177Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6b5444d6c5-k76g8,Uid:7f743830-c848-46de-a996-0043d7234fb9,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:34.495185 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 00:26:34.495358 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 00:26:34.582469 containerd[1658]: time="2025-12-13T00:26:34.582414924Z" level=error msg="Failed to destroy network for sandbox \"0506d840806641a19b2bac155a25961da212819073048d31533ce7e734824307\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.585323 containerd[1658]: time="2025-12-13T00:26:34.585167127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5444d6c5-k76g8,Uid:7f743830-c848-46de-a996-0043d7234fb9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0506d840806641a19b2bac155a25961da212819073048d31533ce7e734824307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.586175 kubelet[2837]: E1213 00:26:34.585745 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0506d840806641a19b2bac155a25961da212819073048d31533ce7e734824307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.586175 kubelet[2837]: E1213 00:26:34.585813 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0506d840806641a19b2bac155a25961da212819073048d31533ce7e734824307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:34.586175 kubelet[2837]: E1213 00:26:34.585838 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0506d840806641a19b2bac155a25961da212819073048d31533ce7e734824307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b5444d6c5-k76g8" Dec 13 00:26:34.586339 kubelet[2837]: E1213 00:26:34.585919 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b5444d6c5-k76g8_calico-system(7f743830-c848-46de-a996-0043d7234fb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b5444d6c5-k76g8_calico-system(7f743830-c848-46de-a996-0043d7234fb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0506d840806641a19b2bac155a25961da212819073048d31533ce7e734824307\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b5444d6c5-k76g8" podUID="7f743830-c848-46de-a996-0043d7234fb9" Dec 13 
00:26:34.672361 systemd[1]: run-netns-cni\x2d0fbb44df\x2de8ff\x2d0577\x2dd9a7\x2d0bb40c592dbc.mount: Deactivated successfully. Dec 13 00:26:34.672491 systemd[1]: run-netns-cni\x2d6f0b5cd5\x2d4054\x2d51e3\x2d5c4c\x2d073e84180e0a.mount: Deactivated successfully. Dec 13 00:26:34.672565 systemd[1]: run-netns-cni\x2d0c8b8dae\x2d720f\x2dca39\x2dfd4b\x2dd2fe0f8f9954.mount: Deactivated successfully. Dec 13 00:26:34.672638 systemd[1]: run-netns-cni\x2d5e02fc70\x2dcd9a\x2d8fdc\x2d7098\x2d94c0e3c7b3d1.mount: Deactivated successfully. Dec 13 00:26:34.672740 systemd[1]: run-netns-cni\x2d017d1b6d\x2d7919\x2de9dd\x2d924a\x2d3cb8c821d1de.mount: Deactivated successfully. Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.784 [INFO][4270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.785 [INFO][4270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" iface="eth0" netns="/var/run/netns/cni-34160dbe-f2d8-1fde-149f-868357d52ba6" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.786 [INFO][4270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" iface="eth0" netns="/var/run/netns/cni-34160dbe-f2d8-1fde-149f-868357d52ba6" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.787 [INFO][4270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" iface="eth0" netns="/var/run/netns/cni-34160dbe-f2d8-1fde-149f-868357d52ba6" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.787 [INFO][4270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.787 [INFO][4270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.880 [INFO][4311] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" HandleID="k8s-pod-network.30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.881 [INFO][4311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:34.897089 containerd[1658]: 2025-12-13 00:26:34.882 [INFO][4311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:34.897742 containerd[1658]: 2025-12-13 00:26:34.889 [WARNING][4311] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" HandleID="k8s-pod-network.30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:34.897742 containerd[1658]: 2025-12-13 00:26:34.889 [INFO][4311] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" HandleID="k8s-pod-network.30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:34.897742 containerd[1658]: 2025-12-13 00:26:34.890 [INFO][4311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:26:34.897742 containerd[1658]: 2025-12-13 00:26:34.894 [INFO][4270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4" Dec 13 00:26:34.901772 systemd[1]: run-netns-cni\x2d34160dbe\x2df2d8\x2d1fde\x2d149f\x2d868357d52ba6.mount: Deactivated successfully. Dec 13 00:26:34.904789 containerd[1658]: time="2025-12-13T00:26:34.904591161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-klw9n,Uid:f7f994f0-f034-4b20-81af-4664a13b71bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.905042 kubelet[2837]: E1213 00:26:34.904991 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.905476 kubelet[2837]: E1213 00:26:34.905066 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" Dec 13 00:26:34.905476 kubelet[2837]: E1213 00:26:34.905091 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" Dec 13 00:26:34.905476 kubelet[2837]: E1213 00:26:34.905170 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"30f13e5f2bbd5363028c02dccd0b4851d81f4d055a925f8761aae7d7ead44bb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.779 [INFO][4290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.779 [INFO][4290] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" iface="eth0" netns="/var/run/netns/cni-8260ad91-272d-d426-3cd3-55fbdcb394b6" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.780 [INFO][4290] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" iface="eth0" netns="/var/run/netns/cni-8260ad91-272d-d426-3cd3-55fbdcb394b6" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.781 [INFO][4290] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" iface="eth0" netns="/var/run/netns/cni-8260ad91-272d-d426-3cd3-55fbdcb394b6" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.781 [INFO][4290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.781 [INFO][4290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.880 [INFO][4309] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" HandleID="k8s-pod-network.81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Workload="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.882 [INFO][4309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:34.913906 containerd[1658]: 2025-12-13 00:26:34.890 [INFO][4309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:34.914306 containerd[1658]: 2025-12-13 00:26:34.900 [WARNING][4309] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" HandleID="k8s-pod-network.81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Workload="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:34.914306 containerd[1658]: 2025-12-13 00:26:34.900 [INFO][4309] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" HandleID="k8s-pod-network.81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Workload="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:34.914306 containerd[1658]: 2025-12-13 00:26:34.905 [INFO][4309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:34.914306 containerd[1658]: 2025-12-13 00:26:34.909 [INFO][4290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1" Dec 13 00:26:34.917523 systemd[1]: run-netns-cni\x2d8260ad91\x2d272d\x2dd426\x2d3cd3\x2d55fbdcb394b6.mount: Deactivated successfully. Dec 13 00:26:34.919668 containerd[1658]: time="2025-12-13T00:26:34.919543191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2j2rk,Uid:87f70cee-6c2d-4ba1-82db-48c2cd8f9534,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.919862 kubelet[2837]: E1213 00:26:34.919823 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:26:34.919961 kubelet[2837]: E1213 00:26:34.919881 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2j2rk" Dec 13 00:26:34.919961 kubelet[2837]: E1213 00:26:34.919901 2837 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2j2rk" Dec 13 00:26:34.920014 kubelet[2837]: E1213 00:26:34.919961 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2j2rk_kube-system(87f70cee-6c2d-4ba1-82db-48c2cd8f9534)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2j2rk_kube-system(87f70cee-6c2d-4ba1-82db-48c2cd8f9534)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81bb7252efb3c3bda661510c0abf13c7dc9596c3a0fe25653b73312bd7a048f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2j2rk" podUID="87f70cee-6c2d-4ba1-82db-48c2cd8f9534" Dec 13 00:26:34.990881 kubelet[2837]: E1213 00:26:34.990840 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:34.994183 containerd[1658]: time="2025-12-13T00:26:34.994146257Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-klw9n,Uid:f7f994f0-f034-4b20-81af-4664a13b71bc,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:34.998445 kubelet[2837]: E1213 00:26:34.998016 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:34.999106 containerd[1658]: time="2025-12-13T00:26:34.999030611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2j2rk,Uid:87f70cee-6c2d-4ba1-82db-48c2cd8f9534,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:35.024307 kubelet[2837]: I1213 00:26:35.023117 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wl4h9" podStartSLOduration=2.029888205 podStartE2EDuration="29.023100386s" podCreationTimestamp="2025-12-13 00:26:06 +0000 UTC" firstStartedPulling="2025-12-13 00:26:07.153730109 +0000 UTC m=+26.781266316" lastFinishedPulling="2025-12-13 00:26:34.14694229 +0000 UTC m=+53.774478497" observedRunningTime="2025-12-13 00:26:35.018716141 +0000 UTC m=+54.646252348" watchObservedRunningTime="2025-12-13 00:26:35.023100386 +0000 UTC m=+54.650636593" Dec 13 00:26:35.159605 kubelet[2837]: I1213 00:26:35.159405 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f743830-c848-46de-a996-0043d7234fb9-whisker-ca-bundle\") pod \"7f743830-c848-46de-a996-0043d7234fb9\" (UID: \"7f743830-c848-46de-a996-0043d7234fb9\") " Dec 13 00:26:35.161707 kubelet[2837]: I1213 00:26:35.160943 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f743830-c848-46de-a996-0043d7234fb9-whisker-backend-key-pair\") pod \"7f743830-c848-46de-a996-0043d7234fb9\" (UID: \"7f743830-c848-46de-a996-0043d7234fb9\") " Dec 13 00:26:35.161707 kubelet[2837]: I1213 00:26:35.160996 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnr8r\" (UniqueName: \"kubernetes.io/projected/7f743830-c848-46de-a996-0043d7234fb9-kube-api-access-mnr8r\") pod \"7f743830-c848-46de-a996-0043d7234fb9\" (UID: \"7f743830-c848-46de-a996-0043d7234fb9\") " Dec 13 00:26:35.162215 kubelet[2837]: I1213 00:26:35.162181 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f743830-c848-46de-a996-0043d7234fb9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7f743830-c848-46de-a996-0043d7234fb9" (UID: "7f743830-c848-46de-a996-0043d7234fb9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 13 00:26:35.180064 kubelet[2837]: I1213 00:26:35.180008 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f743830-c848-46de-a996-0043d7234fb9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7f743830-c848-46de-a996-0043d7234fb9" (UID: "7f743830-c848-46de-a996-0043d7234fb9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 13 00:26:35.180219 kubelet[2837]: I1213 00:26:35.180161 2837 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f743830-c848-46de-a996-0043d7234fb9-kube-api-access-mnr8r" (OuterVolumeSpecName: "kube-api-access-mnr8r") pod "7f743830-c848-46de-a996-0043d7234fb9" (UID: "7f743830-c848-46de-a996-0043d7234fb9"). InnerVolumeSpecName "kube-api-access-mnr8r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 13 00:26:35.257360 systemd-networkd[1316]: cali6e4faeae772: Link UP Dec 13 00:26:35.258681 systemd-networkd[1316]: cali6e4faeae772: Gained carrier Dec 13 00:26:35.261872 kubelet[2837]: I1213 00:26:35.261836 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f743830-c848-46de-a996-0043d7234fb9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 00:26:35.261872 kubelet[2837]: I1213 00:26:35.261868 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f743830-c848-46de-a996-0043d7234fb9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 13 00:26:35.261872 kubelet[2837]: I1213 00:26:35.261877 2837 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mnr8r\" (UniqueName: \"kubernetes.io/projected/7f743830-c848-46de-a996-0043d7234fb9-kube-api-access-mnr8r\") on node \"localhost\" DevicePath \"\"" Dec 13 00:26:35.650474 containerd[1658]: 2025-12-13 00:26:35.088 [INFO][4342] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:35.650474 containerd[1658]: 2025-12-13 00:26:35.109 [INFO][4342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0 calico-apiserver-8d68b8c7b- calico-apiserver f7f994f0-f034-4b20-81af-4664a13b71bc 937 0 2025-12-13 00:26:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d68b8c7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8d68b8c7b-klw9n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e4faeae772 [] [] }} ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-" Dec 13 00:26:35.650474 containerd[1658]: 2025-12-13 00:26:35.109 [INFO][4342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.650474 containerd[1658]: 2025-12-13 00:26:35.154 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" HandleID="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.155 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" HandleID="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8d68b8c7b-klw9n", "timestamp":"2025-12-13 00:26:35.154694923 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.155 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.155 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.155 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.167 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" host="localhost" Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.191 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.204 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.208 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.210 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:35.651734 containerd[1658]: 2025-12-13 00:26:35.210 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" host="localhost" Dec 13 00:26:35.652125 containerd[1658]: 2025-12-13 00:26:35.212 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb Dec 13 00:26:35.652125 containerd[1658]: 2025-12-13 00:26:35.225 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" host="localhost" Dec 13 00:26:35.652125 containerd[1658]: 2025-12-13 00:26:35.235 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" host="localhost" Dec 13 00:26:35.652125 containerd[1658]: 2025-12-13 00:26:35.235 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" host="localhost" Dec 13 00:26:35.652125 containerd[1658]: 2025-12-13 00:26:35.235 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:35.652125 containerd[1658]: 2025-12-13 00:26:35.235 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" HandleID="k8s-pod-network.e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.653428 containerd[1658]: 2025-12-13 00:26:35.241 [INFO][4342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0", GenerateName:"calico-apiserver-8d68b8c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7f994f0-f034-4b20-81af-4664a13b71bc", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d68b8c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8d68b8c7b-klw9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e4faeae772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:35.653523 containerd[1658]: 2025-12-13 00:26:35.241 [INFO][4342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.653523 containerd[1658]: 2025-12-13 00:26:35.241 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e4faeae772 ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.653523 containerd[1658]: 2025-12-13 00:26:35.258 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.653630 containerd[1658]: 2025-12-13 00:26:35.258 [INFO][4342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0", GenerateName:"calico-apiserver-8d68b8c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7f994f0-f034-4b20-81af-4664a13b71bc", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d68b8c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb", Pod:"calico-apiserver-8d68b8c7b-klw9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e4faeae772", MAC:"ca:ec:8a:26:10:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:35.653733 containerd[1658]: 2025-12-13 00:26:35.640 [INFO][4342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-klw9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--klw9n-eth0" Dec 13 00:26:35.670901 systemd[1]: var-lib-kubelet-pods-7f743830\x2dc848\x2d46de\x2da996\x2d0043d7234fb9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmnr8r.mount: Deactivated successfully. Dec 13 00:26:35.671070 systemd[1]: var-lib-kubelet-pods-7f743830\x2dc848\x2d46de\x2da996\x2d0043d7234fb9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 13 00:26:35.691000 audit: BPF prog-id=179 op=LOAD Dec 13 00:26:35.691000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7b7375b0 a2=98 a3=1fffffffffffffff items=0 ppid=4400 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:35.691000 audit: BPF prog-id=179 op=UNLOAD Dec 13 00:26:35.691000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff7b737580 a3=0 items=0 ppid=4400 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:35.691000 audit: BPF prog-id=180 op=LOAD Dec 13 00:26:35.691000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7b737490 a2=94 a3=3 items=0 ppid=4400 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:35.691000 audit: BPF prog-id=180 op=UNLOAD Dec 13 00:26:35.691000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7b737490 a2=94 a3=3 items=0 ppid=4400 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:35.691000 audit: BPF prog-id=181 op=LOAD Dec 13 00:26:35.691000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7b7374d0 a2=94 a3=7fff7b7376b0 items=0 ppid=4400 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:35.691000 audit: BPF prog-id=181 op=UNLOAD Dec 13 00:26:35.691000 audit[4533]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7b7374d0 a2=94 a3=7fff7b7376b0 items=0 ppid=4400 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:26:35.692000 audit: BPF prog-id=182 op=LOAD Dec 13 00:26:35.692000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff75d90c30 a2=98 a3=3 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.692000 audit: BPF prog-id=182 op=UNLOAD Dec 13 00:26:35.692000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff75d90c00 a3=0 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.693000 audit: BPF prog-id=183 op=LOAD Dec 13 00:26:35.693000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff75d90a20 a2=94 a3=54428f items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.693000 audit: BPF prog-id=183 op=UNLOAD Dec 13 00:26:35.693000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff75d90a20 a2=94 a3=54428f items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.693000 audit: BPF prog-id=184 op=LOAD Dec 13 00:26:35.693000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff75d90a50 a2=94 a3=2 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.693000 audit: BPF prog-id=184 op=UNLOAD Dec 13 00:26:35.693000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff75d90a50 a2=0 a3=2 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.891000 audit: BPF prog-id=185 op=LOAD 
Dec 13 00:26:35.891000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff75d90910 a2=94 a3=1 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.891000 audit: BPF prog-id=185 op=UNLOAD Dec 13 00:26:35.891000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff75d90910 a2=94 a3=1 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.900000 audit: BPF prog-id=186 op=LOAD Dec 13 00:26:35.900000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff75d90900 a2=94 a3=4 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.900000 audit: BPF prog-id=186 op=UNLOAD Dec 13 00:26:35.900000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff75d90900 a2=0 a3=4 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.901000 audit: BPF prog-id=187 op=LOAD Dec 13 00:26:35.901000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff75d90760 a2=94 a3=5 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.901000 audit: BPF prog-id=187 op=UNLOAD Dec 13 00:26:35.901000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff75d90760 a2=0 a3=5 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.901000 audit: BPF prog-id=188 op=LOAD Dec 13 00:26:35.901000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff75d90980 a2=94 a3=6 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.901000 audit: BPF prog-id=188 op=UNLOAD Dec 13 00:26:35.901000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff75d90980 a2=0 a3=6 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.901000 audit: BPF prog-id=189 op=LOAD Dec 13 00:26:35.901000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff75d90130 a2=94 a3=88 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.902000 audit: BPF prog-id=190 op=LOAD Dec 13 00:26:35.902000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff75d8ffb0 a2=94 a3=2 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.902000 audit: BPF prog-id=190 op=UNLOAD Dec 13 00:26:35.902000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff75d8ffe0 a2=0 a3=7fff75d900e0 items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.903000 audit: BPF prog-id=189 op=UNLOAD Dec 13 00:26:35.903000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1e3e6d10 a2=0 a3=a3549ad64320072d items=0 ppid=4400 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.903000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:26:35.912000 audit: BPF prog-id=191 op=LOAD Dec 13 00:26:35.912000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff05f04590 a2=98 a3=1999999999999999 items=0 ppid=4400 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.912000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:35.912000 audit: BPF prog-id=191 op=UNLOAD Dec 13 00:26:35.912000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff05f04560 a3=0 items=0 ppid=4400 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.912000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:35.912000 audit: BPF prog-id=192 op=LOAD Dec 13 00:26:35.912000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff05f04470 a2=94 a3=ffff items=0 ppid=4400 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.912000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:35.912000 audit: BPF prog-id=192 op=UNLOAD Dec 13 00:26:35.912000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff05f04470 a2=94 a3=ffff items=0 ppid=4400 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.912000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:35.912000 audit: BPF prog-id=193 op=LOAD Dec 13 00:26:35.912000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff05f044b0 a2=94 a3=7fff05f04690 items=0 ppid=4400 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.912000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:35.912000 audit: BPF prog-id=193 op=UNLOAD Dec 13 00:26:35.912000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff05f044b0 a2=94 a3=7fff05f04690 items=0 ppid=4400 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:35.912000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:26:35.981965 systemd-networkd[1316]: vxlan.calico: Link UP Dec 13 00:26:35.981978 systemd-networkd[1316]: vxlan.calico: Gained carrier Dec 13 00:26:35.993740 kubelet[2837]: E1213 00:26:35.993479 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:36.000216 systemd[1]: Removed slice 
kubepods-besteffort-pod7f743830_c848_46de_a996_0043d7234fb9.slice - libcontainer container kubepods-besteffort-pod7f743830_c848_46de_a996_0043d7234fb9.slice. Dec 13 00:26:36.004000 audit: BPF prog-id=194 op=LOAD Dec 13 00:26:36.004000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe45ad0a30 a2=98 a3=0 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.004000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.004000 audit: BPF prog-id=194 op=UNLOAD Dec 13 00:26:36.004000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe45ad0a00 a3=0 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.004000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.004000 audit: BPF prog-id=195 op=LOAD Dec 13 00:26:36.004000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe45ad0840 a2=94 a3=54428f items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.004000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.004000 audit: BPF prog-id=195 op=UNLOAD Dec 13 00:26:36.004000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe45ad0840 a2=94 a3=54428f items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.004000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.004000 audit: BPF prog-id=196 op=LOAD Dec 13 00:26:36.004000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe45ad0870 a2=94 a3=2 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.004000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.004000 audit: BPF prog-id=196 op=UNLOAD Dec 13 00:26:36.004000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe45ad0870 a2=0 
a3=2 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.004000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.005000 audit: BPF prog-id=197 op=LOAD Dec 13 00:26:36.005000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe45ad0620 a2=94 a3=4 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.005000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.005000 audit: BPF prog-id=197 op=UNLOAD Dec 13 00:26:36.005000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe45ad0620 a2=94 a3=4 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.005000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.005000 audit: BPF prog-id=198 op=LOAD Dec 13 00:26:36.005000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe45ad0720 a2=94 a3=7ffe45ad08a0 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.005000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.005000 audit: BPF prog-id=198 op=UNLOAD Dec 13 00:26:36.005000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe45ad0720 a2=0 a3=7ffe45ad08a0 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.005000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.007000 audit: BPF prog-id=199 op=LOAD Dec 13 00:26:36.007000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe45acfe50 a2=94 a3=2 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.007000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.007000 audit: BPF prog-id=199 op=UNLOAD Dec 13 00:26:36.007000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe45acfe50 a2=0 a3=2 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.007000 audit: BPF prog-id=200 op=LOAD Dec 13 00:26:36.007000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe45acff50 a2=94 a3=30 items=0 ppid=4400 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.007000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:26:36.018000 audit: BPF prog-id=201 op=LOAD Dec 13 00:26:36.018000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd073d0300 a2=98 a3=0 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.018000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.019000 audit: BPF prog-id=201 op=UNLOAD Dec 13 00:26:36.019000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd073d02d0 a3=0 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.019000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.019000 audit: BPF prog-id=202 op=LOAD Dec 13 00:26:36.019000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd073d00f0 a2=94 a3=54428f items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.019000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.019000 audit: BPF prog-id=202 op=UNLOAD Dec 13 00:26:36.019000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd073d00f0 a2=94 a3=54428f items=0 ppid=4400 pid=4603 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.019000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.019000 audit: BPF prog-id=203 op=LOAD Dec 13 00:26:36.019000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd073d0120 a2=94 a3=2 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.019000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.019000 audit: BPF prog-id=203 op=UNLOAD Dec 13 00:26:36.019000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd073d0120 a2=0 a3=2 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.019000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.073610 systemd-networkd[1316]: cali5e77ed0b5a8: Link UP Dec 13 00:26:36.074129 systemd-networkd[1316]: cali5e77ed0b5a8: Gained carrier Dec 13 00:26:36.204522 containerd[1658]: 2025-12-13 00:26:35.109 [INFO][4368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:26:36.204522 containerd[1658]: 2025-12-13 00:26:35.126 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--2j2rk-eth0 coredns-66bc5c9577- kube-system 87f70cee-6c2d-4ba1-82db-48c2cd8f9534 935 0 2025-12-13 00:25:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-2j2rk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e77ed0b5a8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-" Dec 13 00:26:36.204522 containerd[1658]: 2025-12-13 00:26:35.129 [INFO][4368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.204522 containerd[1658]: 2025-12-13 00:26:35.198 [INFO][4479] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" HandleID="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" 
Workload="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.202 [INFO][4479] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" HandleID="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Workload="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-2j2rk", "timestamp":"2025-12-13 00:26:35.198897407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.202 [INFO][4479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.235 [INFO][4479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.235 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.265 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" host="localhost" Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.644 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.654 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.659 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.663 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:36.204826 containerd[1658]: 2025-12-13 00:26:35.663 [INFO][4479] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" host="localhost" Dec 13 00:26:36.205212 containerd[1658]: 2025-12-13 00:26:35.667 [INFO][4479] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed Dec 13 00:26:36.205212 containerd[1658]: 2025-12-13 00:26:35.698 [INFO][4479] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" host="localhost" Dec 13 00:26:36.205212 containerd[1658]: 2025-12-13 00:26:36.061 [INFO][4479] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" host="localhost" Dec 13 00:26:36.205212 containerd[1658]: 2025-12-13 00:26:36.061 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" host="localhost" Dec 13 00:26:36.205212 containerd[1658]: 2025-12-13 00:26:36.061 [INFO][4479] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Dec 13 00:26:36.205212 containerd[1658]: 2025-12-13 00:26:36.061 [INFO][4479] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" HandleID="k8s-pod-network.1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Workload="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.205416 containerd[1658]: 2025-12-13 00:26:36.070 [INFO][4368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2j2rk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"87f70cee-6c2d-4ba1-82db-48c2cd8f9534", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-2j2rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e77ed0b5a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:36.205416 containerd[1658]: 2025-12-13 00:26:36.071 [INFO][4368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.205416 containerd[1658]: 2025-12-13 00:26:36.071 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e77ed0b5a8 ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.205416 
containerd[1658]: 2025-12-13 00:26:36.073 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.205416 containerd[1658]: 2025-12-13 00:26:36.076 [INFO][4368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2j2rk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"87f70cee-6c2d-4ba1-82db-48c2cd8f9534", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed", Pod:"coredns-66bc5c9577-2j2rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e77ed0b5a8", MAC:"8e:5d:ea:8e:f7:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:36.205416 containerd[1658]: 2025-12-13 00:26:36.199 [INFO][4368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" Namespace="kube-system" Pod="coredns-66bc5c9577-2j2rk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2j2rk-eth0" Dec 13 00:26:36.251000 audit: BPF prog-id=204 op=LOAD Dec 13 00:26:36.251000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd073cffe0 a2=94 a3=1 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
00:26:36.251000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.251000 audit: BPF prog-id=204 op=UNLOAD Dec 13 00:26:36.251000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd073cffe0 a2=94 a3=1 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.251000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.274000 audit: BPF prog-id=205 op=LOAD Dec 13 00:26:36.274000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd073cffd0 a2=94 a3=4 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.274000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.275000 audit: BPF prog-id=205 op=UNLOAD Dec 13 00:26:36.275000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd073cffd0 a2=0 a3=4 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.275000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.276000 audit: BPF prog-id=206 op=LOAD Dec 13 00:26:36.276000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd073cfe30 a2=94 a3=5 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.276000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.276000 audit: BPF prog-id=206 op=UNLOAD Dec 13 00:26:36.276000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd073cfe30 a2=0 a3=5 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.276000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.280000 audit: BPF prog-id=207 op=LOAD Dec 13 00:26:36.280000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd073d0050 a2=94 a3=6 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.280000 audit: BPF prog-id=207 op=UNLOAD Dec 13 00:26:36.280000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd073d0050 a2=0 a3=6 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.280000 audit: BPF prog-id=208 op=LOAD Dec 13 00:26:36.280000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd073cf800 a2=94 a3=88 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.280000 audit: BPF prog-id=209 op=LOAD Dec 13 00:26:36.280000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd073cf680 a2=94 a3=2 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.280000 audit: BPF prog-id=209 op=UNLOAD Dec 13 00:26:36.280000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd073cf6b0 a2=0 a3=7ffd073cf7b0 items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.283000 audit: BPF prog-id=208 op=UNLOAD Dec 13 00:26:36.283000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1290cd10 a2=0 a3=bf2446252cd73a3c items=0 ppid=4400 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.283000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:26:36.295000 audit: BPF prog-id=200 op=UNLOAD Dec 13 00:26:36.295000 audit[4400]: SYSCALL arch=c000003e 
syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000714580 a2=0 a3=0 items=0 ppid=4340 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.295000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 13 00:26:36.312279 containerd[1658]: time="2025-12-13T00:26:36.312043668Z" level=info msg="connecting to shim 1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed" address="unix:///run/containerd/s/a01f43c61ac42707b9c09bef6b5500319600240b6d9282ca835630e227e5a306" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:36.315966 containerd[1658]: time="2025-12-13T00:26:36.313700436Z" level=info msg="connecting to shim e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb" address="unix:///run/containerd/s/c99c43b15e7522e4a5aff2249b86fe0de214d4b79b3a77cb24e4b0cf7befa58a" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:36.375113 systemd[1]: Created slice kubepods-besteffort-pod673f283b_d5b4_4c9a_b0fa_82c7c33a08c0.slice - libcontainer container kubepods-besteffort-pod673f283b_d5b4_4c9a_b0fa_82c7c33a08c0.slice. Dec 13 00:26:36.390587 systemd[1]: Started cri-containerd-e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb.scope - libcontainer container e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb. Dec 13 00:26:36.395000 audit[4706]: NETFILTER_CFG table=mangle:117 family=2 entries=16 op=nft_register_chain pid=4706 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:36.395000 audit[4706]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd5ddbd080 a2=0 a3=7ffd5ddbd06c items=0 ppid=4400 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.395000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:36.405000 audit[4705]: NETFILTER_CFG table=raw:118 family=2 entries=21 op=nft_register_chain pid=4705 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:36.405000 audit[4705]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdb20243e0 a2=0 a3=7ffdb20243cc items=0 ppid=4400 pid=4705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.405000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:36.407000 audit[4711]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:36.407000 audit[4711]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc42fb6260 a2=0 a3=7ffc42fb624c items=0 ppid=4400 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.407000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:36.429956 systemd[1]: Started cri-containerd-1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed.scope - libcontainer container 1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed. Dec 13 00:26:36.432000 audit: BPF prog-id=210 op=LOAD Dec 13 00:26:36.433000 audit: BPF prog-id=211 op=LOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.433000 audit: BPF prog-id=211 op=UNLOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.433000 audit: BPF prog-id=212 op=LOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.433000 audit: BPF prog-id=213 op=LOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.433000 audit: BPF prog-id=213 op=UNLOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.433000 audit: BPF prog-id=212 op=UNLOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.433000 audit: BPF prog-id=214 op=LOAD Dec 13 00:26:36.433000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4641 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530353237326462333564383230646634383361333030313965633330 Dec 13 00:26:36.437347 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:36.429000 audit[4719]: NETFILTER_CFG table=filter:120 family=2 entries=81 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:36.429000 audit[4719]: SYSCALL arch=c000003e syscall=46 success=yes exit=44276 a0=3 a1=7ffef1cd92a0 a2=0 a3=7ffef1cd928c items=0 ppid=4400 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.429000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:36.453000 audit: BPF prog-id=215 op=LOAD Dec 13 00:26:36.454000 audit: BPF prog-id=216 op=LOAD Dec 13 00:26:36.454000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.454000 audit: BPF prog-id=216 op=UNLOAD Dec 13 00:26:36.454000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.456000 audit: BPF prog-id=217 op=LOAD Dec 13 00:26:36.456000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.456000 audit: BPF prog-id=218 op=LOAD Dec 13 00:26:36.456000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.456000 audit: BPF prog-id=218 op=UNLOAD Dec 13 00:26:36.456000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.456000 audit: BPF prog-id=217 op=UNLOAD Dec 13 00:26:36.456000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.456000 audit: BPF prog-id=219 op=LOAD Dec 13 00:26:36.456000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4647 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.456000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333964366664643165303134656466633864656538343730373661 Dec 13 00:26:36.460868 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:36.465026 kubelet[2837]: I1213 00:26:36.464985 2837 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f743830-c848-46de-a996-0043d7234fb9" path="/var/lib/kubelet/pods/7f743830-c848-46de-a996-0043d7234fb9/volumes" Dec 13 00:26:36.473901 kubelet[2837]: I1213 00:26:36.473846 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/673f283b-d5b4-4c9a-b0fa-82c7c33a08c0-whisker-ca-bundle\") pod \"whisker-6559dbc8c-wbknq\" (UID: \"673f283b-d5b4-4c9a-b0fa-82c7c33a08c0\") " pod="calico-system/whisker-6559dbc8c-wbknq" Dec 13 00:26:36.473901 kubelet[2837]: I1213 00:26:36.473891 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsz6\" (UniqueName: \"kubernetes.io/projected/673f283b-d5b4-4c9a-b0fa-82c7c33a08c0-kube-api-access-jxsz6\") pod \"whisker-6559dbc8c-wbknq\" (UID: \"673f283b-d5b4-4c9a-b0fa-82c7c33a08c0\") " pod="calico-system/whisker-6559dbc8c-wbknq" Dec 13 00:26:36.474064 kubelet[2837]: I1213 00:26:36.473924 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/673f283b-d5b4-4c9a-b0fa-82c7c33a08c0-whisker-backend-key-pair\") pod \"whisker-6559dbc8c-wbknq\" (UID: \"673f283b-d5b4-4c9a-b0fa-82c7c33a08c0\") " pod="calico-system/whisker-6559dbc8c-wbknq" Dec 13 00:26:36.498320 containerd[1658]: time="2025-12-13T00:26:36.498262973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-klw9n,Uid:f7f994f0-f034-4b20-81af-4664a13b71bc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e05272db35d820df483a30019ec3007788529976ba55522d27808465f232bebb\"" Dec 13 00:26:36.502182 containerd[1658]: time="2025-12-13T00:26:36.502164081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:26:36.520380 containerd[1658]: time="2025-12-13T00:26:36.520315272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2j2rk,Uid:87f70cee-6c2d-4ba1-82db-48c2cd8f9534,Namespace:kube-system,Attempt:0,} returns sandbox id \"1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed\"" Dec 13 00:26:36.521557 kubelet[2837]: E1213 00:26:36.521500 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:36.524000 audit[4751]: NETFILTER_CFG table=filter:121 family=2 entries=42 op=nft_register_chain pid=4751 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:36.524000 audit[4751]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd7b178ce0 a2=0 a3=7ffd7b178ccc items=0 ppid=4400 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.524000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:36.527557 containerd[1658]: time="2025-12-13T00:26:36.527515840Z" level=info msg="CreateContainer within sandbox \"1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:26:36.543260 containerd[1658]: time="2025-12-13T00:26:36.543201315Z" level=info msg="Container 35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:36.551555 containerd[1658]: time="2025-12-13T00:26:36.551513528Z" level=info msg="CreateContainer within sandbox \"1339d6fdd1e014edfc8dee847076a424268fe838ff9c1b995cca4a0a46f934ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab\"" Dec 13 00:26:36.552542 containerd[1658]: time="2025-12-13T00:26:36.552503175Z" level=info msg="StartContainer for \"35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab\"" Dec 13 00:26:36.553646 containerd[1658]: time="2025-12-13T00:26:36.553617124Z" level=info msg="connecting to shim 35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab" address="unix:///run/containerd/s/a01f43c61ac42707b9c09bef6b5500319600240b6d9282ca835630e227e5a306" protocol=ttrpc version=3 Dec 13 00:26:36.577601 systemd[1]: Started cri-containerd-35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab.scope - libcontainer container 35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab. Dec 13 00:26:36.597000 audit: BPF prog-id=220 op=LOAD Dec 13 00:26:36.597000 audit: BPF prog-id=221 op=LOAD Dec 13 00:26:36.597000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.597000 audit: BPF prog-id=221 op=UNLOAD Dec 13 00:26:36.597000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.597000 audit: BPF prog-id=222 op=LOAD Dec 13 00:26:36.597000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.597000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.597000 audit: BPF prog-id=223 op=LOAD Dec 13 00:26:36.597000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.598000 audit: BPF prog-id=223 op=UNLOAD Dec 13 00:26:36.598000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.598000 audit: BPF prog-id=222 op=UNLOAD Dec 13 00:26:36.598000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.598000 audit: BPF prog-id=224 op=LOAD Dec 13 00:26:36.598000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4647 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335626432303264303137623635643465393639616331323333643461 Dec 13 00:26:36.620311 containerd[1658]: time="2025-12-13T00:26:36.620195331Z" level=info msg="StartContainer for \"35bd202d017b65d4e969ac1233d4a8e3bd88a41f59bb9b3b1600bcc4a35f2aab\" returns successfully" Dec 13 00:26:36.690310 containerd[1658]: time="2025-12-13T00:26:36.690227707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6559dbc8c-wbknq,Uid:673f283b-d5b4-4c9a-b0fa-82c7c33a08c0,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:36.816155 systemd-networkd[1316]: cali752eb427a9b: Link UP Dec 13 00:26:36.817012 systemd-networkd[1316]: 
cali752eb427a9b: Gained carrier Dec 13 00:26:36.818122 containerd[1658]: time="2025-12-13T00:26:36.818084228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:36.820101 containerd[1658]: time="2025-12-13T00:26:36.820019368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:26:36.820213 containerd[1658]: time="2025-12-13T00:26:36.820097805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:36.820626 kubelet[2837]: E1213 00:26:36.820546 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:36.820746 kubelet[2837]: E1213 00:26:36.820643 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:36.820945 kubelet[2837]: E1213 00:26:36.820881 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:36.821383 kubelet[2837]: E1213 00:26:36.821064 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.734 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6559dbc8c--wbknq-eth0 whisker-6559dbc8c- calico-system 673f283b-d5b4-4c9a-b0fa-82c7c33a08c0 984 0 2025-12-13 00:26:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6559dbc8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6559dbc8c-wbknq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali752eb427a9b [] [] }} ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.734 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" 
Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.767 [INFO][4803] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" HandleID="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Workload="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.767 [INFO][4803] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" HandleID="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Workload="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6559dbc8c-wbknq", "timestamp":"2025-12-13 00:26:36.767170767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.767 [INFO][4803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.767 [INFO][4803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.767 [INFO][4803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.778 [INFO][4803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.783 [INFO][4803] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.789 [INFO][4803] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.791 [INFO][4803] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.793 [INFO][4803] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.793 [INFO][4803] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.796 [INFO][4803] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.800 [INFO][4803] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.809 [INFO][4803] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.809 [INFO][4803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" host="localhost" Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.809 [INFO][4803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:26:36.834015 containerd[1658]: 2025-12-13 00:26:36.809 [INFO][4803] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" HandleID="k8s-pod-network.1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Workload="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.835066 containerd[1658]: 2025-12-13 00:26:36.813 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6559dbc8c--wbknq-eth0", GenerateName:"whisker-6559dbc8c-", Namespace:"calico-system", SelfLink:"", UID:"673f283b-d5b4-4c9a-b0fa-82c7c33a08c0", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6559dbc8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6559dbc8c-wbknq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali752eb427a9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:36.835066 containerd[1658]: 2025-12-13 00:26:36.813 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.835066 containerd[1658]: 2025-12-13 00:26:36.813 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali752eb427a9b ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.835066 containerd[1658]: 2025-12-13 00:26:36.816 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" 
Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.835066 containerd[1658]: 2025-12-13 00:26:36.817 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6559dbc8c--wbknq-eth0", GenerateName:"whisker-6559dbc8c-", Namespace:"calico-system", SelfLink:"", UID:"673f283b-d5b4-4c9a-b0fa-82c7c33a08c0", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6559dbc8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae", Pod:"whisker-6559dbc8c-wbknq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali752eb427a9b", MAC:"5e:d7:01:35:9c:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:36.835066 containerd[1658]: 2025-12-13 00:26:36.827 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" Namespace="calico-system" Pod="whisker-6559dbc8c-wbknq" WorkloadEndpoint="localhost-k8s-whisker--6559dbc8c--wbknq-eth0" Dec 13 00:26:36.851000 audit[4820]: NETFILTER_CFG table=filter:122 family=2 entries=67 op=nft_register_chain pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:36.851000 audit[4820]: SYSCALL arch=c000003e syscall=46 success=yes exit=38236 a0=3 a1=7ffcf0851840 a2=0 a3=7ffcf085182c items=0 ppid=4400 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.851000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:36.866104 containerd[1658]: time="2025-12-13T00:26:36.866042414Z" level=info msg="connecting to shim 1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae" address="unix:///run/containerd/s/a84f1b94bbdc8ab2cce3e5910675e946e0e579197a2855c1bd307063ecc8ddbc" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:36.903607 systemd[1]: Started cri-containerd-1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae.scope - libcontainer container 1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae. 
Dec 13 00:26:36.920000 audit: BPF prog-id=225 op=LOAD Dec 13 00:26:36.921000 audit: BPF prog-id=226 op=LOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.921000 audit: BPF prog-id=226 op=UNLOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.921000 audit: BPF prog-id=227 op=LOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.921000 audit: BPF prog-id=228 op=LOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.921000 audit: BPF prog-id=228 op=UNLOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.921000 audit: BPF prog-id=227 op=UNLOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.921000 audit: BPF prog-id=229 op=LOAD Dec 13 00:26:36.921000 audit[4842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4828 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:36.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131363464646132643136623366343332633566326331646339666331 Dec 13 00:26:36.923067 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:36.965270 containerd[1658]: time="2025-12-13T00:26:36.965190089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6559dbc8c-wbknq,Uid:673f283b-d5b4-4c9a-b0fa-82c7c33a08c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1164dda2d16b3f432c5f2c1dc9fc15c5b22886e94269e362c1bb1e6b0ad18bae\"" Dec 13 00:26:36.967338 containerd[1658]: time="2025-12-13T00:26:36.967283075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:26:36.996688 systemd-networkd[1316]: cali6e4faeae772: Gained IPv6LL Dec 13 00:26:37.001258 kubelet[2837]: E1213 00:26:37.001168 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:37.004960 kubelet[2837]: E1213 00:26:37.004888 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:26:37.015657 kubelet[2837]: I1213 00:26:37.015199 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2j2rk" podStartSLOduration=49.015182061 podStartE2EDuration="49.015182061s" podCreationTimestamp="2025-12-13 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:37.013650908 +0000 UTC m=+56.641187115" watchObservedRunningTime="2025-12-13 00:26:37.015182061 +0000 UTC m=+56.642718258" Dec 13 00:26:37.033000 audit[4869]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:37.033000 audit[4869]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd71520710 a2=0 a3=7ffd715206fc items=0 ppid=2990 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:37.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:37.043000 audit[4869]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:37.043000 audit[4869]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd71520710 a2=0 a3=0 items=0 ppid=2990 pid=4869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:37.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:37.312463 systemd-networkd[1316]: cali5e77ed0b5a8: Gained IPv6LL Dec 13 00:26:37.391255 containerd[1658]: time="2025-12-13T00:26:37.391163534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:37.523518 containerd[1658]: time="2025-12-13T00:26:37.523429441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:26:37.523709 containerd[1658]: time="2025-12-13T00:26:37.523534859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:37.523737 kubelet[2837]: E1213 00:26:37.523699 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:26:37.523775 kubelet[2837]: E1213 00:26:37.523745 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:26:37.523850 kubelet[2837]: E1213 00:26:37.523819 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6559dbc8c-wbknq_calico-system(673f283b-d5b4-4c9a-b0fa-82c7c33a08c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:37.524782 containerd[1658]: time="2025-12-13T00:26:37.524725493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:26:37.888413 systemd-networkd[1316]: vxlan.calico: Gained IPv6LL Dec 13 00:26:37.953976 containerd[1658]: time="2025-12-13T00:26:37.953900181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:37.955582 containerd[1658]: time="2025-12-13T00:26:37.955539817Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:26:37.955774 containerd[1658]: time="2025-12-13T00:26:37.955579231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:37.956779 kubelet[2837]: E1213 00:26:37.955939 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:26:37.956779 kubelet[2837]: E1213 00:26:37.956000 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:26:37.956779 kubelet[2837]: E1213 00:26:37.956083 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6559dbc8c-wbknq_calico-system(673f283b-d5b4-4c9a-b0fa-82c7c33a08c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:37.956779 kubelet[2837]: E1213 00:26:37.956129 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:26:38.009489 kubelet[2837]: E1213 00:26:38.009389 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:38.012053 kubelet[2837]: E1213 00:26:38.011032 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:26:38.012907 kubelet[2837]: E1213 00:26:38.012875 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:26:38.076000 audit[4872]: NETFILTER_CFG table=filter:125 family=2 entries=17 op=nft_register_rule pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:38.076000 audit[4872]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7ee03a00 a2=0 a3=7fff7ee039ec items=0 ppid=2990 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:38.082000 audit[4872]: NETFILTER_CFG table=nat:126 family=2 entries=35 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:38.082000 audit[4872]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff7ee03a00 a2=0 a3=7fff7ee039ec items=0 ppid=2990 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:38.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:38.849417 systemd-networkd[1316]: cali752eb427a9b: Gained IPv6LL Dec 13 00:26:39.011738 kubelet[2837]: E1213 00:26:39.011691 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:41.559914 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:60382.service - OpenSSH per-connection server daemon (10.0.0.1:60382). Dec 13 00:26:41.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.109:22-10.0.0.1:60382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:41.562186 kernel: kauditd_printk_skb: 309 callbacks suppressed Dec 13 00:26:41.562261 kernel: audit: type=1130 audit(1765585601.559:686): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.109:22-10.0.0.1:60382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:41.735000 audit[4887]: USER_ACCT pid=4887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.735669 sshd[4887]: Accepted publickey for core from 10.0.0.1 port 60382 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:41.738600 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:41.747291 kernel: audit: type=1101 audit(1765585601.735:687): pid=4887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.747395 kernel: audit: type=1103 audit(1765585601.737:688): pid=4887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.737000 audit[4887]: CRED_ACQ pid=4887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.745866 systemd-logind[1630]: New session 9 of user core. Dec 13 00:26:41.751203 kernel: audit: type=1006 audit(1765585601.737:689): pid=4887 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 00:26:41.737000 audit[4887]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf1d367f0 a2=3 a3=0 items=0 ppid=1 pid=4887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:41.757253 kernel: audit: type=1300 audit(1765585601.737:689): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf1d367f0 a2=3 a3=0 items=0 ppid=1 pid=4887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:41.757337 kernel: audit: type=1327 audit(1765585601.737:689): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:41.737000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:41.758668 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 00:26:41.761000 audit[4887]: USER_START pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.764000 audit[4891]: CRED_ACQ pid=4891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.774474 kernel: audit: type=1105 audit(1765585601.761:690): pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.774635 kernel: audit: type=1103 audit(1765585601.764:691): pid=4891 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.978446 sshd[4891]: Connection closed by 10.0.0.1 port 60382 Dec 13 00:26:41.978667 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:41.979000 audit[4887]: USER_END pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.984806 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:60382.service: Deactivated successfully. Dec 13 00:26:41.987783 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 00:26:41.988286 kernel: audit: type=1106 audit(1765585601.979:692): pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.980000 audit[4887]: CRED_DISP pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:41.988872 systemd-logind[1630]: Session 9 logged out. Waiting for processes to exit. Dec 13 00:26:41.991194 systemd-logind[1630]: Removed session 9. Dec 13 00:26:41.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.109:22-10.0.0.1:60382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:41.994258 kernel: audit: type=1104 audit(1765585601.980:693): pid=4887 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:46.590908 containerd[1658]: time="2025-12-13T00:26:46.590852302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pkv7,Uid:7c357da7-f81d-4093-8d71-96d21eb95cdd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:46.993060 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:60386.service - OpenSSH per-connection server daemon (10.0.0.1:60386). Dec 13 00:26:46.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.109:22-10.0.0.1:60386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:46.994849 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:26:46.994924 kernel: audit: type=1130 audit(1765585606.991:695): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.109:22-10.0.0.1:60386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:47.061000 audit[4948]: USER_ACCT pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.062621 sshd[4948]: Accepted publickey for core from 10.0.0.1 port 60386 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:47.064928 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:47.061000 audit[4948]: CRED_ACQ pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.070908 systemd-logind[1630]: New session 10 of user core. 
Dec 13 00:26:47.077142 kernel: audit: type=1101 audit(1765585607.061:696): pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.077270 kernel: audit: type=1103 audit(1765585607.061:697): pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.061000 audit[4948]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6f772d60 a2=3 a3=0 items=0 ppid=1 pid=4948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.089263 kernel: audit: type=1006 audit(1765585607.061:698): pid=4948 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 00:26:47.089317 kernel: audit: type=1300 audit(1765585607.061:698): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6f772d60 a2=3 a3=0 items=0 ppid=1 pid=4948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.089354 kernel: audit: type=1327 audit(1765585607.061:698): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:47.061000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:47.093546 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 00:26:47.095000 audit[4948]: USER_START pid=4948 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.097000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.108640 kernel: audit: type=1105 audit(1765585607.095:699): pid=4948 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.108853 kernel: audit: type=1103 audit(1765585607.097:700): pid=4952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.200265 systemd-networkd[1316]: cali1148dfb3016: Link UP Dec 13 00:26:47.201924 systemd-networkd[1316]: cali1148dfb3016: Gained carrier Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.907 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8pkv7-eth0 csi-node-driver- calico-system 7c357da7-f81d-4093-8d71-96d21eb95cdd 745 0 2025-12-13 00:26:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8pkv7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1148dfb3016 [] [] }} ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.907 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.936 [INFO][4939] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" HandleID="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Workload="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.936 [INFO][4939] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" HandleID="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Workload="localhost-k8s-csi--node--driver--8pkv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000580a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8pkv7", "timestamp":"2025-12-13 00:26:46.936707468 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.936 [INFO][4939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.936 [INFO][4939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.937 [INFO][4939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.971 [INFO][4939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.977 [INFO][4939] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:46.982 [INFO][4939] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.041 [INFO][4939] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.044 [INFO][4939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.044 [INFO][4939] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.045 [INFO][4939] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.071 [INFO][4939] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.190 [INFO][4939] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.190 [INFO][4939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" host="localhost" Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.190 [INFO][4939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:26:47.232881 containerd[1658]: 2025-12-13 00:26:47.190 [INFO][4939] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" HandleID="k8s-pod-network.07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Workload="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.234252 containerd[1658]: 2025-12-13 00:26:47.194 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8pkv7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c357da7-f81d-4093-8d71-96d21eb95cdd", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8pkv7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1148dfb3016", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:47.234252 containerd[1658]: 2025-12-13 00:26:47.194 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.234252 containerd[1658]: 2025-12-13 00:26:47.194 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1148dfb3016 ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.234252 containerd[1658]: 2025-12-13 00:26:47.204 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.234252 containerd[1658]: 2025-12-13 00:26:47.207 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8pkv7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c357da7-f81d-4093-8d71-96d21eb95cdd", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d", Pod:"csi-node-driver-8pkv7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1148dfb3016", MAC:"46:f1:8d:4e:44:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:47.234252 containerd[1658]: 2025-12-13 00:26:47.227 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" Namespace="calico-system" Pod="csi-node-driver-8pkv7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8pkv7-eth0" Dec 13 00:26:47.243000 audit[4974]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4974 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:47.248264 kernel: audit: type=1325 audit(1765585607.243:701): table=filter:127 family=2 entries=44 op=nft_register_chain pid=4974 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:47.243000 audit[4974]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff1e1f7e60 a2=0 a3=7fff1e1f7e4c items=0 ppid=4400 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.243000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:47.258917 kernel: audit: type=1300 audit(1765585607.243:701): arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff1e1f7e60 a2=0 a3=7fff1e1f7e4c items=0 ppid=4400 pid=4974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.284362 containerd[1658]: time="2025-12-13T00:26:47.284290395Z" level=info msg="connecting to shim 07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d" 
address="unix:///run/containerd/s/9b7f2d12dd9201b383eea2073620f6ac619dc44ad73790bf85438f06e74762e2" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:47.288667 sshd[4952]: Connection closed by 10.0.0.1 port 60386 Dec 13 00:26:47.287674 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:47.288000 audit[4948]: USER_END pid=4948 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.288000 audit[4948]: CRED_DISP pid=4948 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:47.293565 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:60386.service: Deactivated successfully. Dec 13 00:26:47.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.109:22-10.0.0.1:60386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:47.305047 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 00:26:47.310297 systemd-logind[1630]: Session 10 logged out. Waiting for processes to exit. Dec 13 00:26:47.312954 systemd-logind[1630]: Removed session 10. Dec 13 00:26:47.328524 systemd[1]: Started cri-containerd-07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d.scope - libcontainer container 07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d. Dec 13 00:26:47.343000 audit: BPF prog-id=230 op=LOAD Dec 13 00:26:47.344000 audit: BPF prog-id=231 op=LOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.344000 audit: BPF prog-id=231 op=UNLOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.344000 audit: BPF prog-id=232 op=LOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.344000 audit: BPF prog-id=233 op=LOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.344000 audit: BPF prog-id=233 op=UNLOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.344000 audit: BPF prog-id=232 op=UNLOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.344000 audit: BPF prog-id=234 op=LOAD Dec 13 00:26:47.344000 audit[4997]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4984 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037623065656438343736383138646563346238383266353632306362 Dec 13 00:26:47.347879 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:47.371255 containerd[1658]: time="2025-12-13T00:26:47.371191303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pkv7,Uid:7c357da7-f81d-4093-8d71-96d21eb95cdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"07b0eed8476818dec4b882f5620cb64456fe941a92d51e7a0d1cbb9238ca027d\"" Dec 13 00:26:47.375582 containerd[1658]: 
time="2025-12-13T00:26:47.375536257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:26:47.464818 containerd[1658]: time="2025-12-13T00:26:47.464726106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-pl252,Uid:06395f50-a88f-48a6-b5f1-47617410b0b2,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:47.604388 systemd-networkd[1316]: calie119dbaa574: Link UP Dec 13 00:26:47.604668 systemd-networkd[1316]: calie119dbaa574: Gained carrier Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.508 [INFO][5025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0 calico-apiserver-8d68b8c7b- calico-apiserver 06395f50-a88f-48a6-b5f1-47617410b0b2 877 0 2025-12-13 00:26:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d68b8c7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8d68b8c7b-pl252 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie119dbaa574 [] [] }} ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.508 [INFO][5025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.541 [INFO][5039] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" HandleID="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.541 [INFO][5039] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" HandleID="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8d68b8c7b-pl252", "timestamp":"2025-12-13 00:26:47.541730572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.541 [INFO][5039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.542 [INFO][5039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.542 [INFO][5039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.550 [INFO][5039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.556 [INFO][5039] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.565 [INFO][5039] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.573 [INFO][5039] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.577 [INFO][5039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.577 [INFO][5039] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.580 [INFO][5039] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352 Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.588 [INFO][5039] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.596 [INFO][5039] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.596 [INFO][5039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" host="localhost" Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.596 [INFO][5039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
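In the IPAM trace above, Calico takes the host-wide lock, confirms this node's affinity for the block 192.168.88.128/26 (64 addresses, .128 through .191) and claims 192.168.88.133 from it for the calico-apiserver pod. A small Go sketch of the same containment check, using the standard net/netip package:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block with host affinity and the address claimed in the trace above.
        block := netip.MustParsePrefix("192.168.88.128/26")
        addr := netip.MustParseAddr("192.168.88.133")

        // A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses in the block.
        size := 1 << (32 - block.Bits())

        fmt.Printf("block %s holds %d addresses; contains %s: %v\n", block, size, addr, block.Contains(addr))
        // block 192.168.88.128/26 holds 64 addresses; contains 192.168.88.133: true
    }
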
Dec 13 00:26:47.622263 containerd[1658]: 2025-12-13 00:26:47.596 [INFO][5039] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" HandleID="k8s-pod-network.f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Workload="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.623167 containerd[1658]: 2025-12-13 00:26:47.601 [INFO][5025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0", GenerateName:"calico-apiserver-8d68b8c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"06395f50-a88f-48a6-b5f1-47617410b0b2", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d68b8c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8d68b8c7b-pl252", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie119dbaa574", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:47.623167 containerd[1658]: 2025-12-13 00:26:47.601 [INFO][5025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.623167 containerd[1658]: 2025-12-13 00:26:47.601 [INFO][5025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie119dbaa574 ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.623167 containerd[1658]: 2025-12-13 00:26:47.604 [INFO][5025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.623167 containerd[1658]: 2025-12-13 00:26:47.604 [INFO][5025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0", GenerateName:"calico-apiserver-8d68b8c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"06395f50-a88f-48a6-b5f1-47617410b0b2", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d68b8c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352", Pod:"calico-apiserver-8d68b8c7b-pl252", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie119dbaa574", MAC:"9e:89:97:44:5c:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:47.623167 containerd[1658]: 2025-12-13 00:26:47.617 [INFO][5025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" Namespace="calico-apiserver" Pod="calico-apiserver-8d68b8c7b-pl252" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d68b8c7b--pl252-eth0" Dec 13 00:26:47.635000 audit[5055]: NETFILTER_CFG table=filter:128 family=2 entries=49 op=nft_register_chain pid=5055 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:47.635000 audit[5055]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7fffec393c50 a2=0 a3=7fffec393c3c items=0 ppid=4400 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.635000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:47.653817 containerd[1658]: time="2025-12-13T00:26:47.653741601Z" level=info msg="connecting to shim f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352" address="unix:///run/containerd/s/d3bb29bbd61f32cefec66405c2984dae6db58bd0c6891bfce4f0f06d8858a261" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:47.693895 systemd[1]: Started cri-containerd-f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352.scope - libcontainer container f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352. 
Dec 13 00:26:47.709000 audit: BPF prog-id=235 op=LOAD Dec 13 00:26:47.709000 audit: BPF prog-id=236 op=LOAD Dec 13 00:26:47.709000 audit[5076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.710000 audit: BPF prog-id=236 op=UNLOAD Dec 13 00:26:47.710000 audit[5076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.710000 audit: BPF prog-id=237 op=LOAD Dec 13 00:26:47.710000 audit[5076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.710000 audit: BPF prog-id=238 op=LOAD Dec 13 00:26:47.710000 audit[5076]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.710000 audit: BPF prog-id=238 op=UNLOAD Dec 13 00:26:47.710000 audit[5076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.710000 audit: BPF prog-id=237 op=UNLOAD Dec 13 00:26:47.710000 audit[5076]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.710000 audit: BPF prog-id=239 op=LOAD Dec 13 00:26:47.710000 audit[5076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5064 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:47.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630346162646539623230643234346137323738626531333133643931 Dec 13 00:26:47.712598 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:47.736697 containerd[1658]: time="2025-12-13T00:26:47.736610499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:47.738550 containerd[1658]: time="2025-12-13T00:26:47.738472567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:26:47.738694 containerd[1658]: time="2025-12-13T00:26:47.738501733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:47.738809 kubelet[2837]: E1213 00:26:47.738753 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:26:47.739201 kubelet[2837]: E1213 00:26:47.738822 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:26:47.739201 kubelet[2837]: E1213 00:26:47.739048 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:47.740658 containerd[1658]: time="2025-12-13T00:26:47.740627931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:26:47.754716 containerd[1658]: time="2025-12-13T00:26:47.754655624Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8d68b8c7b-pl252,Uid:06395f50-a88f-48a6-b5f1-47617410b0b2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f04abde9b20d244a7278be1313d9184fe4b63edc409282f6c52c7ff97f738352\"" Dec 13 00:26:48.072663 containerd[1658]: time="2025-12-13T00:26:48.072577537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:48.139328 containerd[1658]: time="2025-12-13T00:26:48.139200033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:48.139328 containerd[1658]: time="2025-12-13T00:26:48.139283073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:26:48.139709 kubelet[2837]: E1213 00:26:48.139654 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:26:48.139801 kubelet[2837]: E1213 00:26:48.139709 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:26:48.140037 kubelet[2837]: E1213 00:26:48.139982 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:48.140120 kubelet[2837]: E1213 00:26:48.140074 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:48.140214 containerd[1658]: time="2025-12-13T00:26:48.140053596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:26:48.463839 kubelet[2837]: E1213 00:26:48.463675 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:48.463991 containerd[1658]: time="2025-12-13T00:26:48.463952823Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-9z4hg,Uid:b920e14f-b711-410a-9f57-a0f3b9193e31,Namespace:kube-system,Attempt:0,}" Dec 13 00:26:48.467115 containerd[1658]: time="2025-12-13T00:26:48.467057812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9f4c9d-8t67s,Uid:ebf778c3-930a-43a3-9210-8534e588628e,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:48.469426 containerd[1658]: time="2025-12-13T00:26:48.469080848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d758f46-d75hr,Uid:4deb6945-66eb-45de-ac81-4441491473f3,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:26:48.471890 containerd[1658]: time="2025-12-13T00:26:48.471851143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:48.472822 containerd[1658]: time="2025-12-13T00:26:48.472753872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgdw,Uid:221701a5-b818-49d6-9c29-c4e060d651fd,Namespace:calico-system,Attempt:0,}" Dec 13 00:26:48.476367 containerd[1658]: time="2025-12-13T00:26:48.476313706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:26:48.476621 containerd[1658]: time="2025-12-13T00:26:48.476416284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:48.476671 kubelet[2837]: E1213 00:26:48.476577 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:48.476671 kubelet[2837]: E1213 00:26:48.476621 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:48.476755 kubelet[2837]: E1213 00:26:48.476692 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:48.476755 kubelet[2837]: E1213 00:26:48.476723 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:26:48.576636 systemd-networkd[1316]: cali1148dfb3016: Gained IPv6LL Dec 13 00:26:48.666771 systemd-networkd[1316]: calia90dd086acb: Link UP Dec 13 00:26:48.667067 systemd-networkd[1316]: calia90dd086acb: Gained carrier Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 
00:26:48.545 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--9z4hg-eth0 coredns-66bc5c9577- kube-system b920e14f-b711-410a-9f57-a0f3b9193e31 879 0 2025-12-13 00:25:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-9z4hg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia90dd086acb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.546 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.605 [INFO][5164] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" HandleID="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Workload="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.605 [INFO][5164] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" HandleID="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Workload="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139690), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-9z4hg", "timestamp":"2025-12-13 00:26:48.605286992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.605 [INFO][5164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.605 [INFO][5164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
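The 404 Not Found / NotFound errors in the preceding entries mean the tags ghcr.io/flatcar/calico/csi:v3.30.4, .../node-driver-registrar:v3.30.4 and .../apiserver:v3.30.4 simply do not resolve in the registry. A hedged Go sketch of the resolve step containerd reports as failing, assuming ghcr.io's usual anonymous-token flow (the /token endpoint and Accept header are assumptions; the repository and tag are taken from the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func tagExists(repo, tag string) (bool, error) {
        // Anonymous pull token (assumed ghcr.io endpoint).
        resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            return false, err
        }

        // HEAD the manifest: 200 means the tag resolves, 404 matches the log above.
        req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        if err != nil {
            return false, err
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            return false, err
        }
        res.Body.Close()
        return res.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := tagExists("flatcar/calico/csi", "v3.30.4")
        fmt.Println(ok, err) // expected: false <nil>, matching the NotFound errors above
    }
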
Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.605 [INFO][5164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.620 [INFO][5164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.627 [INFO][5164] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.636 [INFO][5164] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.638 [INFO][5164] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.641 [INFO][5164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.641 [INFO][5164] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.643 [INFO][5164] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675 Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.648 [INFO][5164] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.655 [INFO][5164] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.655 [INFO][5164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" host="localhost" Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.655 [INFO][5164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
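The WorkloadEndpoint dumps that follow print the coredns ports as Go struct literals with hexadecimal values: Port:0x35 is 53 (dns and dns-tcp), 0x23c1 is 9153 (metrics), 0x1f90 is 8080 (liveness-probe) and 0x1ff5 is 8181 (readiness-probe), matching the named-port list a few entries above. A trivial Go check of those conversions:

    package main

    import "fmt"

    func main() {
        // Hex port values as printed in the WorkloadEndpointPort dumps below.
        ports := []struct {
            name string
            port int
        }{
            {"dns", 0x35},
            {"dns-tcp", 0x35},
            {"metrics", 0x23c1},
            {"liveness-probe", 0x1f90},
            {"readiness-probe", 0x1ff5},
        }
        for _, p := range ports {
            fmt.Printf("%s -> %d\n", p.name, p.port)
        }
        // dns 53, dns-tcp 53, metrics 9153, liveness-probe 8080, readiness-probe 8181
    }
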
Dec 13 00:26:48.683815 containerd[1658]: 2025-12-13 00:26:48.655 [INFO][5164] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" HandleID="k8s-pod-network.dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Workload="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.684958 containerd[1658]: 2025-12-13 00:26:48.660 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9z4hg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b920e14f-b711-410a-9f57-a0f3b9193e31", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-9z4hg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia90dd086acb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:48.684958 containerd[1658]: 2025-12-13 00:26:48.662 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.684958 containerd[1658]: 2025-12-13 00:26:48.662 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia90dd086acb ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.684958 containerd[1658]: 2025-12-13 00:26:48.665 
[INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.684958 containerd[1658]: 2025-12-13 00:26:48.665 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9z4hg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b920e14f-b711-410a-9f57-a0f3b9193e31", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675", Pod:"coredns-66bc5c9577-9z4hg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia90dd086acb", MAC:"82:ff:43:31:4b:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:48.684958 containerd[1658]: 2025-12-13 00:26:48.681 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" Namespace="kube-system" Pod="coredns-66bc5c9577-9z4hg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9z4hg-eth0" Dec 13 00:26:48.699000 audit[5210]: NETFILTER_CFG table=filter:129 family=2 entries=48 op=nft_register_chain pid=5210 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:48.699000 audit[5210]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7ffe11ad0120 a2=0 a3=7ffe11ad010c items=0 ppid=4400 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.699000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:48.715284 containerd[1658]: time="2025-12-13T00:26:48.715114489Z" level=info msg="connecting to shim dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675" address="unix:///run/containerd/s/2b346704ea68edfe2ac46cfea79fe33814d52cd4c94b74963eccac60cf4691c2" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:48.746535 systemd[1]: Started cri-containerd-dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675.scope - libcontainer container dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675. Dec 13 00:26:48.763000 audit: BPF prog-id=240 op=LOAD Dec 13 00:26:48.764000 audit: BPF prog-id=241 op=LOAD Dec 13 00:26:48.764000 audit[5231]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.764000 audit: BPF prog-id=241 op=UNLOAD Dec 13 00:26:48.764000 audit[5231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.764000 audit: BPF prog-id=242 op=LOAD Dec 13 00:26:48.764000 audit[5231]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.764000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.765000 audit: BPF prog-id=243 op=LOAD Dec 13 00:26:48.765000 audit[5231]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.765000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.765000 audit: BPF prog-id=243 op=UNLOAD Dec 13 00:26:48.765000 audit[5231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.765000 audit: BPF prog-id=242 op=UNLOAD Dec 13 00:26:48.765000 audit[5231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.765000 audit: BPF prog-id=244 op=LOAD Dec 13 00:26:48.765000 audit[5231]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5219 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:48.765000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466656364636361336239346163643134333834356135383132396332 Dec 13 00:26:48.768539 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:49.024522 containerd[1658]: time="2025-12-13T00:26:49.024464934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9z4hg,Uid:b920e14f-b711-410a-9f57-a0f3b9193e31,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675\"" Dec 13 00:26:49.025344 kubelet[2837]: E1213 00:26:49.025314 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:49.035657 kubelet[2837]: E1213 00:26:49.035608 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" 
podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:26:49.035899 kubelet[2837]: E1213 00:26:49.035864 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:26:49.242292 containerd[1658]: time="2025-12-13T00:26:49.242215633Z" level=info msg="CreateContainer within sandbox \"dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:26:49.344436 systemd-networkd[1316]: calie119dbaa574: Gained IPv6LL Dec 13 00:26:49.889027 containerd[1658]: time="2025-12-13T00:26:49.888969765Z" level=info msg="Container 773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:26:50.304460 systemd-networkd[1316]: calia90dd086acb: Gained IPv6LL Dec 13 00:26:50.462019 containerd[1658]: time="2025-12-13T00:26:50.461973244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:26:50.563175 containerd[1658]: time="2025-12-13T00:26:50.563061479Z" level=info msg="CreateContainer within sandbox \"dfecdcca3b94acd143845a58129c2b24f01c4d9f0e76dde37a89d038bc498675\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7\"" Dec 13 00:26:50.563886 containerd[1658]: time="2025-12-13T00:26:50.563825589Z" level=info msg="StartContainer for \"773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7\"" Dec 13 00:26:50.564837 containerd[1658]: time="2025-12-13T00:26:50.564789042Z" level=info msg="connecting to shim 773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7" address="unix:///run/containerd/s/2b346704ea68edfe2ac46cfea79fe33814d52cd4c94b74963eccac60cf4691c2" protocol=ttrpc version=3 Dec 13 00:26:50.600444 systemd[1]: Started cri-containerd-773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7.scope - libcontainer container 773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7. 
Dec 13 00:26:50.613000 audit: BPF prog-id=245 op=LOAD Dec 13 00:26:50.614000 audit: BPF prog-id=246 op=LOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.614000 audit: BPF prog-id=246 op=UNLOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.614000 audit: BPF prog-id=247 op=LOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.614000 audit: BPF prog-id=248 op=LOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.614000 audit: BPF prog-id=248 op=UNLOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.614000 audit: BPF prog-id=247 op=UNLOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.614000 audit: BPF prog-id=249 op=LOAD Dec 13 00:26:50.614000 audit[5258]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5219 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737336661393930666563393766373031633761383361363966626637 Dec 13 00:26:50.752772 containerd[1658]: time="2025-12-13T00:26:50.752724168Z" level=info msg="StartContainer for \"773fa990fec97f701c7a83a69fbf74fc440195daf622281b2dba7f7395135ba7\" returns successfully" Dec 13 00:26:50.792988 systemd-networkd[1316]: cali2541ad13b17: Link UP Dec 13 00:26:50.793982 systemd-networkd[1316]: cali2541ad13b17: Gained carrier Dec 13 00:26:50.797000 audit[5294]: NETFILTER_CFG table=filter:130 family=2 entries=14 op=nft_register_rule pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:50.797000 audit[5294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd969f1b0 a2=0 a3=7ffcd969f19c items=0 ppid=2990 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.797000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:50.804000 audit[5294]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:50.804000 audit[5294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcd969f1b0 a2=0 a3=7ffcd969f19c items=0 ppid=2990 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:50.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:50.962603 containerd[1658]: time="2025-12-13T00:26:50.962482409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:51.040411 kubelet[2837]: E1213 00:26:51.040307 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:51.095417 containerd[1658]: time="2025-12-13T00:26:51.095355959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:26:51.095574 containerd[1658]: time="2025-12-13T00:26:51.095471762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:51.095637 kubelet[2837]: E1213 00:26:51.095585 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:51.095740 kubelet[2837]: E1213 00:26:51.095636 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:51.095740 kubelet[2837]: E1213 00:26:51.095722 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:51.095816 kubelet[2837]: E1213 00:26:51.095755 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.556 [INFO][5130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--6qgdw-eth0 goldmane-7c778bb748- calico-system 221701a5-b818-49d6-9c29-c4e060d651fd 875 0 2025-12-13 00:26:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-6qgdw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2541ad13b17 [] [] }} ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.556 [INFO][5130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.613 [INFO][5167] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" 
HandleID="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Workload="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.614 [INFO][5167] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" HandleID="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Workload="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f6540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-6qgdw", "timestamp":"2025-12-13 00:26:48.613888226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.614 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.655 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.656 [INFO][5167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.720 [INFO][5167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:48.727 [INFO][5167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:49.199 [INFO][5167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:49.576 [INFO][5167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.208 [INFO][5167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.209 [INFO][5167] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.210 [INFO][5167] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.497 [INFO][5167] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.783 [INFO][5167] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" host="localhost" Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.783 [INFO][5167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" host="localhost" Dec 13 
00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.784 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:26:51.158566 containerd[1658]: 2025-12-13 00:26:50.784 [INFO][5167] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" HandleID="k8s-pod-network.763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Workload="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.159258 containerd[1658]: 2025-12-13 00:26:50.789 [INFO][5130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--6qgdw-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"221701a5-b818-49d6-9c29-c4e060d651fd", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-6qgdw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2541ad13b17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:51.159258 containerd[1658]: 2025-12-13 00:26:50.789 [INFO][5130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.159258 containerd[1658]: 2025-12-13 00:26:50.789 [INFO][5130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2541ad13b17 ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.159258 containerd[1658]: 2025-12-13 00:26:50.794 [INFO][5130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.159258 containerd[1658]: 2025-12-13 00:26:50.794 [INFO][5130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" 
Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--6qgdw-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"221701a5-b818-49d6-9c29-c4e060d651fd", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc", Pod:"goldmane-7c778bb748-6qgdw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2541ad13b17", MAC:"ce:8c:87:97:ea:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:51.159258 containerd[1658]: 2025-12-13 00:26:51.155 [INFO][5130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgdw" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--6qgdw-eth0" Dec 13 00:26:51.172000 audit[5303]: NETFILTER_CFG table=filter:132 family=2 entries=64 op=nft_register_chain pid=5303 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:51.172000 audit[5303]: SYSCALL arch=c000003e syscall=46 success=yes exit=31120 a0=3 a1=7ffc359a48f0 a2=0 a3=7ffc359a48dc items=0 ppid=4400 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.172000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:51.234137 kubelet[2837]: I1213 00:26:51.233681 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9z4hg" podStartSLOduration=63.233663834 podStartE2EDuration="1m3.233663834s" podCreationTimestamp="2025-12-13 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:26:51.233181025 +0000 UTC m=+70.860717232" watchObservedRunningTime="2025-12-13 00:26:51.233663834 +0000 UTC m=+70.861200041" Dec 13 00:26:51.455194 containerd[1658]: time="2025-12-13T00:26:51.455152781Z" level=info msg="connecting to shim 763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc" address="unix:///run/containerd/s/81ad657f1760801d7467731c4dd940ab11a1e59bb1bc802022c17e0357ac2c9e" namespace=k8s.io protocol=ttrpc 
version=3 Dec 13 00:26:51.465471 containerd[1658]: time="2025-12-13T00:26:51.465407238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:26:51.484430 systemd[1]: Started cri-containerd-763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc.scope - libcontainer container 763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc. Dec 13 00:26:51.496000 audit: BPF prog-id=250 op=LOAD Dec 13 00:26:51.496000 audit: BPF prog-id=251 op=LOAD Dec 13 00:26:51.496000 audit[5324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.497000 audit: BPF prog-id=251 op=UNLOAD Dec 13 00:26:51.497000 audit[5324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.497000 audit: BPF prog-id=252 op=LOAD Dec 13 00:26:51.497000 audit[5324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.497000 audit: BPF prog-id=253 op=LOAD Dec 13 00:26:51.497000 audit[5324]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.497000 audit: BPF prog-id=253 op=UNLOAD Dec 13 00:26:51.497000 audit[5324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.497000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.497000 audit: BPF prog-id=252 op=UNLOAD Dec 13 00:26:51.497000 audit[5324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.497000 audit: BPF prog-id=254 op=LOAD Dec 13 00:26:51.497000 audit[5324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5313 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736336237336135383433313266326632373962363264326139616264 Dec 13 00:26:51.499428 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:51.573517 containerd[1658]: time="2025-12-13T00:26:51.573444529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgdw,Uid:221701a5-b818-49d6-9c29-c4e060d651fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"763b73a584312f2f279b62d2a9abdf223384b5fcc5c0c925207bf8ba4e799fcc\"" Dec 13 00:26:51.573833 systemd-networkd[1316]: cali39883d71fce: Link UP Dec 13 00:26:51.575093 systemd-networkd[1316]: cali39883d71fce: Gained carrier Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:48.566 [INFO][5117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0 calico-kube-controllers-6dc9f4c9d- calico-system ebf778c3-930a-43a3-9210-8534e588628e 872 0 2025-12-13 00:26:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dc9f4c9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6dc9f4c9d-8t67s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali39883d71fce [] [] }} ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:48.568 [INFO][5117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" 
Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:48.621 [INFO][5179] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" HandleID="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Workload="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:48.622 [INFO][5179] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" HandleID="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Workload="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6dc9f4c9d-8t67s", "timestamp":"2025-12-13 00:26:48.621895495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:48.622 [INFO][5179] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:50.784 [INFO][5179] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:50.784 [INFO][5179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:50.817 [INFO][5179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.169 [INFO][5179] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.344 [INFO][5179] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.360 [INFO][5179] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.378 [INFO][5179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.378 [INFO][5179] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.387 [INFO][5179] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3 Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.516 [INFO][5179] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.568 [INFO][5179] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.568 [INFO][5179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" host="localhost" Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.568 [INFO][5179] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:26:51.652477 containerd[1658]: 2025-12-13 00:26:51.568 [INFO][5179] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" HandleID="k8s-pod-network.22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Workload="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.653375 containerd[1658]: 2025-12-13 00:26:51.571 [INFO][5117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0", GenerateName:"calico-kube-controllers-6dc9f4c9d-", Namespace:"calico-system", SelfLink:"", UID:"ebf778c3-930a-43a3-9210-8534e588628e", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc9f4c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6dc9f4c9d-8t67s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali39883d71fce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:51.653375 containerd[1658]: 2025-12-13 00:26:51.571 [INFO][5117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.653375 containerd[1658]: 2025-12-13 00:26:51.571 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39883d71fce ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.653375 containerd[1658]: 2025-12-13 00:26:51.574 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.653375 containerd[1658]: 2025-12-13 00:26:51.575 [INFO][5117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0", GenerateName:"calico-kube-controllers-6dc9f4c9d-", Namespace:"calico-system", SelfLink:"", UID:"ebf778c3-930a-43a3-9210-8534e588628e", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc9f4c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3", Pod:"calico-kube-controllers-6dc9f4c9d-8t67s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali39883d71fce", MAC:"b6:a5:79:20:46:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:51.653375 containerd[1658]: 2025-12-13 00:26:51.649 [INFO][5117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" Namespace="calico-system" Pod="calico-kube-controllers-6dc9f4c9d-8t67s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc9f4c9d--8t67s-eth0" Dec 13 00:26:51.671000 audit[5359]: NETFILTER_CFG table=filter:133 family=2 entries=60 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:51.671000 audit[5359]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7fff7593e1e0 a2=0 a3=7fff7593e1cc items=0 ppid=4400 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.671000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 
13 00:26:51.750641 systemd-networkd[1316]: calibf3e80a307e: Link UP Dec 13 00:26:51.750828 systemd-networkd[1316]: calibf3e80a307e: Gained carrier Dec 13 00:26:51.828000 audit[5362]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:51.828000 audit[5362]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdf3458200 a2=0 a3=7ffdf34581ec items=0 ppid=2990 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:51.855000 audit[5362]: NETFILTER_CFG table=nat:135 family=2 entries=56 op=nft_register_chain pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:51.855000 audit[5362]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdf3458200 a2=0 a3=7ffdf34581ec items=0 ppid=2990 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:48.562 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0 calico-apiserver-6f7d758f46- calico-apiserver 4deb6945-66eb-45de-ac81-4441491473f3 876 0 2025-12-13 00:26:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7d758f46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f7d758f46-d75hr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf3e80a307e [] [] }} ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:48.562 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:48.625 [INFO][5187] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" HandleID="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Workload="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:48.626 [INFO][5187] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" HandleID="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" 
Workload="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f7d758f46-d75hr", "timestamp":"2025-12-13 00:26:48.625833008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:48.626 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.568 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.568 [INFO][5187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.600 [INFO][5187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.653 [INFO][5187] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.661 [INFO][5187] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.664 [INFO][5187] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.667 [INFO][5187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.667 [INFO][5187] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.669 [INFO][5187] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208 Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.690 [INFO][5187] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.743 [INFO][5187] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.743 [INFO][5187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" host="localhost" Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.743 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
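The IPAM records above show Calico handing out 192.168.88.135 (goldmane), 192.168.88.136 (calico-kube-controllers) and 192.168.88.137 (calico-apiserver) from the host's affine block 192.168.88.128/26, serialising each assignment behind the host-wide IPAM lock. As a minimal sketch (Go standard library only, not Calico's own code), the claimed addresses can be checked against that /26:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses copied from the ipam/ipam.go records above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.135", "192.168.88.136", "192.168.88.137"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```

A /26 block covers 64 addresses, which is why all three pods created in this section land inside the same 192.168.88.128/26 affinity on "localhost".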
Dec 13 00:26:51.864411 containerd[1658]: 2025-12-13 00:26:51.743 [INFO][5187] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" HandleID="k8s-pod-network.831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Workload="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.865004 containerd[1658]: 2025-12-13 00:26:51.747 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0", GenerateName:"calico-apiserver-6f7d758f46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4deb6945-66eb-45de-ac81-4441491473f3", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7d758f46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f7d758f46-d75hr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf3e80a307e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:51.865004 containerd[1658]: 2025-12-13 00:26:51.747 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.865004 containerd[1658]: 2025-12-13 00:26:51.747 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf3e80a307e ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.865004 containerd[1658]: 2025-12-13 00:26:51.750 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.865004 containerd[1658]: 2025-12-13 00:26:51.750 [INFO][5125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0", GenerateName:"calico-apiserver-6f7d758f46-", Namespace:"calico-apiserver", SelfLink:"", UID:"4deb6945-66eb-45de-ac81-4441491473f3", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7d758f46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208", Pod:"calico-apiserver-6f7d758f46-d75hr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf3e80a307e", MAC:"ba:46:51:f7:d0:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:26:51.865004 containerd[1658]: 2025-12-13 00:26:51.860 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" Namespace="calico-apiserver" Pod="calico-apiserver-6f7d758f46-d75hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f7d758f46--d75hr-eth0" Dec 13 00:26:51.865446 containerd[1658]: time="2025-12-13T00:26:51.865384476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:51.880304 containerd[1658]: time="2025-12-13T00:26:51.880211891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:26:51.880543 containerd[1658]: time="2025-12-13T00:26:51.880309358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:51.880844 kubelet[2837]: E1213 00:26:51.880788 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:26:51.880951 kubelet[2837]: E1213 00:26:51.880875 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 
00:26:51.881185 kubelet[2837]: E1213 00:26:51.881140 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6559dbc8c-wbknq_calico-system(673f283b-d5b4-4c9a-b0fa-82c7c33a08c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:51.882550 containerd[1658]: time="2025-12-13T00:26:51.882361792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:26:51.880000 audit[5371]: NETFILTER_CFG table=filter:136 family=2 entries=65 op=nft_register_chain pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:26:51.880000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=30204 a0=3 a1=7ffe27d95150 a2=0 a3=7ffe27d9513c items=0 ppid=4400 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.880000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:26:51.904186 containerd[1658]: time="2025-12-13T00:26:51.903444869Z" level=info msg="connecting to shim 22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3" address="unix:///run/containerd/s/154ef375d6420b24a052ca88b68c1ef46c72aca877373b7e59af5b374bc4165b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:51.923023 containerd[1658]: time="2025-12-13T00:26:51.922896540Z" level=info msg="connecting to shim 831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208" address="unix:///run/containerd/s/dcfc7af2784116d961d3c845b2536b9084d8dc235867b2a652f496563b9be984" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:26:51.941442 systemd[1]: Started cri-containerd-22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3.scope - libcontainer container 22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3. Dec 13 00:26:51.958471 systemd[1]: Started cri-containerd-831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208.scope - libcontainer container 831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208. 
Dec 13 00:26:51.963000 audit: BPF prog-id=255 op=LOAD Dec 13 00:26:51.964000 audit: BPF prog-id=256 op=LOAD Dec 13 00:26:51.964000 audit[5399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.964000 audit: BPF prog-id=256 op=UNLOAD Dec 13 00:26:51.964000 audit[5399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.964000 audit: BPF prog-id=257 op=LOAD Dec 13 00:26:51.964000 audit[5399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.964000 audit: BPF prog-id=258 op=LOAD Dec 13 00:26:51.964000 audit[5399]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.965000 audit: BPF prog-id=258 op=UNLOAD Dec 13 00:26:51.965000 audit[5399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.965000 audit: BPF prog-id=257 op=UNLOAD Dec 13 00:26:51.965000 audit[5399]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.965000 audit: BPF prog-id=259 op=LOAD Dec 13 00:26:51.965000 audit[5399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5380 pid=5399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232633232333630623330623935373738616337623838316234363361 Dec 13 00:26:51.967793 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:51.985000 audit: BPF prog-id=260 op=LOAD Dec 13 00:26:51.986000 audit: BPF prog-id=261 op=LOAD Dec 13 00:26:51.986000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.987000 audit: BPF prog-id=261 op=UNLOAD Dec 13 00:26:51.987000 audit[5422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.987000 audit: BPF prog-id=262 op=LOAD Dec 13 00:26:51.987000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.987000 audit: BPF prog-id=263 op=LOAD 
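The audit PROCTITLE records in this stretch carry the runc command line hex-encoded, with NUL bytes separating the arguments. A small illustrative decoder follows; the constant is only the leading portion of one proctitle value from the records above (the full value continues with the --log path), and the decoding is generic, not specific to runc:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Leading portion of a PROCTITLE value from the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The encoded argv uses NUL bytes between arguments.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}
```

The paired BPF prog-id LOAD/UNLOAD and SYSCALL records around them (arch=c000003e, syscall=321 and syscall=3) correspond to runc calling bpf(2) and close(2) on x86_64 while it sets up each container's cgroup filters.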
Dec 13 00:26:51.987000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.987000 audit: BPF prog-id=263 op=UNLOAD Dec 13 00:26:51.987000 audit[5422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.987000 audit: BPF prog-id=262 op=UNLOAD Dec 13 00:26:51.987000 audit[5422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.987000 audit: BPF prog-id=264 op=LOAD Dec 13 00:26:51.987000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5400 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:51.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833316464313435623562363265653136383861393335346334633664 Dec 13 00:26:51.990256 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:26:52.028530 containerd[1658]: time="2025-12-13T00:26:52.028430918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc9f4c9d-8t67s,Uid:ebf778c3-930a-43a3-9210-8534e588628e,Namespace:calico-system,Attempt:0,} returns sandbox id \"22c22360b30b95778ac7b881b463a876098852a1007abc984cbbdee1c4c492b3\"" Dec 13 00:26:52.039185 containerd[1658]: time="2025-12-13T00:26:52.039147619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7d758f46-d75hr,Uid:4deb6945-66eb-45de-ac81-4441491473f3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"831dd145b5b62ee1688a9354c4c6dd53c04c906f9b9aad10aee97af0f43b5208\"" Dec 13 00:26:52.045893 kubelet[2837]: E1213 00:26:52.045831 
2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:52.266678 containerd[1658]: time="2025-12-13T00:26:52.266579757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:52.304124 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:57794.service - OpenSSH per-connection server daemon (10.0.0.1:57794). Dec 13 00:26:52.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.109:22-10.0.0.1:57794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:52.306055 kernel: kauditd_printk_skb: 185 callbacks suppressed Dec 13 00:26:52.306205 kernel: audit: type=1130 audit(1765585612.302:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.109:22-10.0.0.1:57794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:52.389000 audit[5461]: USER_ACCT pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.391354 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 57794 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:52.394434 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:52.389000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.401453 systemd-logind[1630]: New session 11 of user core. 
Dec 13 00:26:52.402989 kernel: audit: type=1101 audit(1765585612.389:771): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.403067 kernel: audit: type=1103 audit(1765585612.389:772): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.403108 kernel: audit: type=1006 audit(1765585612.389:773): pid=5461 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 00:26:52.404981 containerd[1658]: time="2025-12-13T00:26:52.404857262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:52.404981 containerd[1658]: time="2025-12-13T00:26:52.404927316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:26:52.405271 kubelet[2837]: E1213 00:26:52.405210 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:26:52.405352 kubelet[2837]: E1213 00:26:52.405273 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:26:52.405602 kubelet[2837]: E1213 00:26:52.405486 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:52.405713 containerd[1658]: time="2025-12-13T00:26:52.405641508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:26:52.407342 kernel: audit: type=1300 audit(1765585612.389:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe188b1370 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.389000 audit[5461]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe188b1370 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:52.407508 kubelet[2837]: E1213 00:26:52.405782 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:26:52.412008 kernel: audit: type=1327 audit(1765585612.389:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:52.389000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:52.412501 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 00:26:52.414000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.463700 kernel: audit: type=1105 audit(1765585612.414:774): pid=5461 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.463790 kernel: audit: type=1103 audit(1765585612.416:775): pid=5465 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.416000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.463309 systemd-networkd[1316]: cali2541ad13b17: Gained IPv6LL Dec 13 00:26:52.719529 sshd[5465]: Connection closed by 10.0.0.1 port 57794 Dec 13 00:26:52.719769 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:52.721000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.727361 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:57794.service: Deactivated successfully. Dec 13 00:26:52.729293 kernel: audit: type=1106 audit(1765585612.721:776): pid=5461 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.721000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.729696 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 00:26:52.730766 systemd-logind[1630]: Session 11 logged out. Waiting for processes to exit. 
Dec 13 00:26:52.732714 systemd-logind[1630]: Removed session 11. Dec 13 00:26:52.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.109:22-10.0.0.1:57794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:52.735265 kernel: audit: type=1104 audit(1765585612.721:777): pid=5461 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:52.793838 containerd[1658]: time="2025-12-13T00:26:52.793732895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:52.829632 containerd[1658]: time="2025-12-13T00:26:52.829554531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:52.833770 containerd[1658]: time="2025-12-13T00:26:52.833637622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:26:52.834218 kubelet[2837]: E1213 00:26:52.834130 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:26:52.834218 kubelet[2837]: E1213 00:26:52.834219 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:26:52.834614 kubelet[2837]: E1213 00:26:52.834490 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6559dbc8c-wbknq_calico-system(673f283b-d5b4-4c9a-b0fa-82c7c33a08c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:52.834614 kubelet[2837]: E1213 00:26:52.834570 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:26:52.834735 containerd[1658]: time="2025-12-13T00:26:52.834630188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:26:53.048162 kubelet[2837]: E1213 00:26:53.047907 2837 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:26:53.048900 kubelet[2837]: E1213 00:26:53.048858 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:26:53.084000 audit[5480]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:53.084000 audit[5480]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe38890270 a2=0 a3=7ffe3889025c items=0 ppid=2990 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:53.094000 audit[5480]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:53.094000 audit[5480]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe38890270 a2=0 a3=7ffe3889025c items=0 ppid=2990 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:53.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:53.234468 containerd[1658]: time="2025-12-13T00:26:53.234391398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:53.402968 containerd[1658]: time="2025-12-13T00:26:53.402748939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:53.402968 containerd[1658]: time="2025-12-13T00:26:53.402798775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:26:53.403198 kubelet[2837]: E1213 00:26:53.403129 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:26:53.403198 kubelet[2837]: E1213 00:26:53.403185 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" 
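Several distinct images fail the same way in this log: containerd reports a 404 from ghcr.io and kubelet surfaces it as ErrImagePull, then ImagePullBackOff. When a journal is full of these, a quick tally shows which references are actually missing and how often they are retried. A minimal sketch, assuming the journal has been exported to a plain-text file (`journal.txt` is an illustrative name):

```python
import re
from collections import Counter

# Count which image references fail to resolve in an exported journal dump.
# Export with e.g. journalctl > journal.txt (the path here is illustrative).
PATTERN = re.compile(r"failed to resolve image: (\S+): not found")

def missing_images(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for ref in PATTERN.findall(line):
                counts[ref] += 1
    return counts

if __name__ == "__main__":
    for ref, n in missing_images("journal.txt").most_common():
        print(f"{n:4d}  {ref}")
```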
Dec 13 00:26:53.403934 kubelet[2837]: E1213 00:26:53.403790 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:53.404198 kubelet[2837]: E1213 00:26:53.404029 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:26:53.404376 containerd[1658]: time="2025-12-13T00:26:53.404009128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:26:53.442925 systemd-networkd[1316]: cali39883d71fce: Gained IPv6LL Dec 13 00:26:53.504404 systemd-networkd[1316]: calibf3e80a307e: Gained IPv6LL Dec 13 00:26:53.882528 containerd[1658]: time="2025-12-13T00:26:53.882462914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:26:53.883846 containerd[1658]: time="2025-12-13T00:26:53.883794399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:26:53.883960 containerd[1658]: time="2025-12-13T00:26:53.883886746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:26:53.884130 kubelet[2837]: E1213 00:26:53.884030 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:53.884130 kubelet[2837]: E1213 00:26:53.884094 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:26:53.884222 kubelet[2837]: E1213 00:26:53.884201 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:26:53.884306 kubelet[2837]: E1213 00:26:53.884272 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:26:54.051163 kubelet[2837]: E1213 00:26:54.050841 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:26:54.051163 kubelet[2837]: E1213 00:26:54.050864 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:26:54.123000 audit[5482]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:54.123000 audit[5482]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7cf55310 a2=0 a3=7fff7cf552fc items=0 ppid=2990 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:54.147000 audit[5482]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:26:54.147000 audit[5482]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7cf55310 a2=0 a3=7fff7cf552fc items=0 ppid=2990 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:54.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:26:57.737579 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:57808.service - OpenSSH per-connection server daemon (10.0.0.1:57808). Dec 13 00:26:57.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.109:22-10.0.0.1:57808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:26:57.748260 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 13 00:26:57.748353 kernel: audit: type=1130 audit(1765585617.736:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.109:22-10.0.0.1:57808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:26:57.808000 audit[5492]: USER_ACCT pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.809607 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 57808 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:26:57.811914 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:26:57.809000 audit[5492]: CRED_ACQ pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.816281 systemd-logind[1630]: New session 12 of user core. Dec 13 00:26:57.818831 kernel: audit: type=1101 audit(1765585617.808:784): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.818891 kernel: audit: type=1103 audit(1765585617.809:785): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.818907 kernel: audit: type=1006 audit(1765585617.809:786): pid=5492 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 13 00:26:57.809000 audit[5492]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6b9a76e0 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:57.853816 kernel: audit: type=1300 audit(1765585617.809:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6b9a76e0 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:26:57.853906 kernel: audit: type=1327 audit(1765585617.809:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:57.809000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:26:57.862465 systemd[1]: Started session-12.scope - Session 12 of User core. 
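The NETFILTER_CFG records above come from `iptables-restore` (via xtables-nft-multi) re-registering the filter and nat tables over netlink, a pattern consistent with kube-proxy's periodic rule sync, although the parent pid 2990 is not identified anywhere in this excerpt. A sketch that tallies such reloads per table and operation from an exported journal (`journal.txt` again being an illustrative path):

```python
import re
from collections import Counter

# Tally NETFILTER_CFG audit records per (table, op), e.g. ("filter",
# "nft_register_rule"), to see how often rule sets are re-registered.
# Field syntax follows the records in this log.
NFT_RE = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=\d+ op=(\w+)")

def rule_reloads(path: str) -> Counter:
    reloads = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for table, op in NFT_RE.findall(line):
                reloads[(table, op)] += 1
    return reloads

if __name__ == "__main__":
    for (table, op), n in sorted(rule_reloads("journal.txt").items()):
        print(f"{table:8s} {op:20s} {n}")
```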
Dec 13 00:26:57.864000 audit[5492]: USER_START pid=5492 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.866000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.888349 kernel: audit: type=1105 audit(1765585617.864:787): pid=5492 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:57.888435 kernel: audit: type=1103 audit(1765585617.866:788): pid=5496 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.022336 sshd[5496]: Connection closed by 10.0.0.1 port 57808 Dec 13 00:26:58.022610 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Dec 13 00:26:58.022000 audit[5492]: USER_END pid=5492 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.028356 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:57808.service: Deactivated successfully. Dec 13 00:26:58.023000 audit[5492]: CRED_DISP pid=5492 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.030612 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 00:26:58.031501 systemd-logind[1630]: Session 12 logged out. Waiting for processes to exit. Dec 13 00:26:58.033055 systemd-logind[1630]: Removed session 12. Dec 13 00:26:58.034446 kernel: audit: type=1106 audit(1765585618.022:789): pid=5492 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.034497 kernel: audit: type=1104 audit(1765585618.023:790): pid=5492 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:26:58.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.109:22-10.0.0.1:57808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:00.463502 containerd[1658]: time="2025-12-13T00:27:00.463333945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:27:00.820142 containerd[1658]: time="2025-12-13T00:27:00.820056549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:00.821356 containerd[1658]: time="2025-12-13T00:27:00.821307490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:27:00.821415 containerd[1658]: time="2025-12-13T00:27:00.821346615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:00.821585 kubelet[2837]: E1213 00:27:00.821539 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:00.822006 kubelet[2837]: E1213 00:27:00.821593 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:00.822006 kubelet[2837]: E1213 00:27:00.821687 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:00.822701 containerd[1658]: time="2025-12-13T00:27:00.822658744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:27:01.188552 containerd[1658]: time="2025-12-13T00:27:01.188410697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:01.271215 containerd[1658]: time="2025-12-13T00:27:01.271141112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:01.272844 containerd[1658]: time="2025-12-13T00:27:01.272786978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:27:01.273383 kubelet[2837]: E1213 00:27:01.273312 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:01.273383 kubelet[2837]: E1213 00:27:01.273376 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:01.273513 kubelet[2837]: E1213 00:27:01.273466 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:01.273566 kubelet[2837]: E1213 00:27:01.273512 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:27:01.462630 containerd[1658]: time="2025-12-13T00:27:01.462489325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:01.810854 containerd[1658]: time="2025-12-13T00:27:01.810771207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:01.896791 containerd[1658]: time="2025-12-13T00:27:01.896667442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:01.896791 containerd[1658]: time="2025-12-13T00:27:01.896735884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:01.897130 kubelet[2837]: E1213 00:27:01.897063 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:01.897130 kubelet[2837]: E1213 00:27:01.897124 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:01.897629 kubelet[2837]: E1213 00:27:01.897260 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:01.897629 kubelet[2837]: E1213 00:27:01.897300 2837 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:27:02.462805 kubelet[2837]: E1213 00:27:02.462728 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:27:03.035989 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:59496.service - OpenSSH per-connection server daemon (10.0.0.1:59496). Dec 13 00:27:03.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.109:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.037646 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:03.037761 kernel: audit: type=1130 audit(1765585623.035:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.109:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.105000 audit[5512]: USER_ACCT pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.105978 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 59496 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:03.108852 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:03.107000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.114251 systemd-logind[1630]: New session 13 of user core. 
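After each ErrImagePull, kubelet does not retry immediately; the "Back-off pulling image" (ImagePullBackOff) entries show it waiting out an exponential back-off between attempts, which is why the same pull error reappears at growing intervals. A sketch of that schedule using a 10-second initial delay doubling up to a 300-second cap; both numbers are assumptions (commonly cited kubelet defaults), not values read from this log:

```python
# Approximate the delay kubelet waits between image pull retries, assuming an
# exponential back-off with a 10 s initial delay and a 300 s cap. Both values
# are assumed defaults, not taken from this log.
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay *= 2

for attempt, delay in backoff_schedule():
    print(f"retry {attempt}: wait ~{delay:.0f}s")
# retry 1: ~10s, retry 2: ~20s, ... capped at ~300s from retry 6 onward
```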
Dec 13 00:27:03.116610 kernel: audit: type=1101 audit(1765585623.105:793): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.116643 kernel: audit: type=1103 audit(1765585623.107:794): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.116661 kernel: audit: type=1006 audit(1765585623.107:795): pid=5512 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 00:27:03.107000 audit[5512]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12b36600 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:03.126273 kernel: audit: type=1300 audit(1765585623.107:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12b36600 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:03.126459 kernel: audit: type=1327 audit(1765585623.107:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:03.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:03.128149 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 00:27:03.132000 audit[5512]: USER_START pid=5512 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.140272 kernel: audit: type=1105 audit(1765585623.132:796): pid=5512 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.140441 kernel: audit: type=1103 audit(1765585623.134:797): pid=5516 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.134000 audit[5516]: CRED_ACQ pid=5516 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.212092 sshd[5516]: Connection closed by 10.0.0.1 port 59496 Dec 13 00:27:03.212474 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:03.213000 audit[5512]: USER_END pid=5512 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.213000 audit[5512]: CRED_DISP pid=5512 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.225032 kernel: audit: type=1106 audit(1765585623.213:798): pid=5512 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.225091 kernel: audit: type=1104 audit(1765585623.213:799): pid=5512 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.230795 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:59496.service: Deactivated successfully. Dec 13 00:27:03.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.109:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.233022 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 00:27:03.233866 systemd-logind[1630]: Session 13 logged out. Waiting for processes to exit. Dec 13 00:27:03.236890 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500). 
Dec 13 00:27:03.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.109:22-10.0.0.1:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.238197 systemd-logind[1630]: Removed session 13. Dec 13 00:27:03.300000 audit[5530]: USER_ACCT pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.301310 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:03.302000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.302000 audit[5530]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8499a480 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:03.302000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:03.303979 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:03.310301 systemd-logind[1630]: New session 14 of user core. Dec 13 00:27:03.315494 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 00:27:03.319000 audit[5530]: USER_START pid=5530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.321000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.703760 sshd[5534]: Connection closed by 10.0.0.1 port 59500 Dec 13 00:27:03.705154 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:03.706000 audit[5530]: USER_END pid=5530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.706000 audit[5530]: CRED_DISP pid=5530 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.716429 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:59500.service: Deactivated successfully. Dec 13 00:27:03.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.109:22-10.0.0.1:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:27:03.719085 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 00:27:03.719913 systemd-logind[1630]: Session 14 logged out. Waiting for processes to exit. Dec 13 00:27:03.724492 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:59508.service - OpenSSH per-connection server daemon (10.0.0.1:59508). Dec 13 00:27:03.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.109:22-10.0.0.1:59508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.726554 systemd-logind[1630]: Removed session 14. Dec 13 00:27:03.794000 audit[5545]: USER_ACCT pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.795163 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:03.796000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.796000 audit[5545]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0dbe3220 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:03.796000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:03.798154 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:03.805706 systemd-logind[1630]: New session 15 of user core. Dec 13 00:27:03.815863 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 00:27:03.819000 audit[5545]: USER_START pid=5545 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.822000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.902275 sshd[5549]: Connection closed by 10.0.0.1 port 59508 Dec 13 00:27:03.901872 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:03.903000 audit[5545]: USER_END pid=5545 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.904000 audit[5545]: CRED_DISP pid=5545 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:03.908888 systemd-logind[1630]: Session 15 logged out. Waiting for processes to exit. Dec 13 00:27:03.909139 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:59508.service: Deactivated successfully. Dec 13 00:27:03.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.109:22-10.0.0.1:59508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:03.911914 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 00:27:03.915421 systemd-logind[1630]: Removed session 15. 
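Sessions 11 through 15 all leave the same audit trail: USER_ACCT and CRED_ACQ on accept, USER_START when the session scope starts, then USER_END and CRED_DISP on close, bracketed by SERVICE_START/SERVICE_STOP for each per-connection sshd unit. Pairing USER_START with USER_END by the `ses=` id gives each session's lifetime. A minimal sketch over a plain-text journal export (the path is illustrative; the syslog-style prefix carries no year, so durations are only meaningful within a single year of log):

```python
import re
from datetime import datetime

# Pair USER_START / USER_END audit records by their ses= id to measure how
# long each SSH session lasted. Assumes one record per line, prefixed with a
# "Dec 13 00:26:52.414000" style timestamp as in this log.
LINE_RE = re.compile(r"^(\w{3} +\d+ [\d:.]+) .*\b(USER_START|USER_END)\b.*\bses=(\d+)\b")

def session_durations(path: str) -> None:
    starts, ends = {}, {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.search(line)
            if not m:
                continue
            stamp = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
            (starts if m.group(2) == "USER_START" else ends)[m.group(3)] = stamp
    for ses, opened in sorted(starts.items(), key=lambda kv: kv[1]):
        closed = ends.get(ses)
        if closed:
            print(f"ses={ses}: {opened.time()} -> {closed.time()} "
                  f"({(closed - opened).total_seconds():.1f}s)")
        else:
            print(f"ses={ses}: opened {opened.time()}, no USER_END seen")

session_durations("journal.txt")
```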
Dec 13 00:27:04.463038 kubelet[2837]: E1213 00:27:04.462958 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:27:04.464295 containerd[1658]: time="2025-12-13T00:27:04.463103756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:27:04.905455 containerd[1658]: time="2025-12-13T00:27:04.905400250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:04.969872 containerd[1658]: time="2025-12-13T00:27:04.969751145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:27:04.969872 containerd[1658]: time="2025-12-13T00:27:04.969799427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:04.970104 kubelet[2837]: E1213 00:27:04.970052 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:04.970197 kubelet[2837]: E1213 00:27:04.970113 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:04.970261 kubelet[2837]: E1213 00:27:04.970204 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:04.970297 kubelet[2837]: E1213 00:27:04.970261 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:27:06.079208 kubelet[2837]: E1213 00:27:06.079172 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:06.463379 containerd[1658]: time="2025-12-13T00:27:06.463212770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:06.973524 containerd[1658]: time="2025-12-13T00:27:06.973457982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:07.064973 containerd[1658]: time="2025-12-13T00:27:07.064900511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:07.065147 containerd[1658]: time="2025-12-13T00:27:07.065000722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:07.065285 kubelet[2837]: E1213 00:27:07.065201 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:07.065376 kubelet[2837]: E1213 00:27:07.065281 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:07.065489 kubelet[2837]: E1213 00:27:07.065452 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:07.065558 kubelet[2837]: E1213 00:27:07.065490 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:27:07.065693 containerd[1658]: time="2025-12-13T00:27:07.065666461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:27:07.461565 kubelet[2837]: E1213 00:27:07.461511 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:07.491305 containerd[1658]: time="2025-12-13T00:27:07.491221241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:07.525134 containerd[1658]: time="2025-12-13T00:27:07.525051482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:27:07.525310 containerd[1658]: time="2025-12-13T00:27:07.525148627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:07.525381 kubelet[2837]: E1213 00:27:07.525331 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:07.525423 kubelet[2837]: E1213 00:27:07.525385 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:07.525488 kubelet[2837]: E1213 00:27:07.525469 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:07.525526 kubelet[2837]: E1213 00:27:07.525503 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:27:08.920394 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:59518.service - OpenSSH per-connection server daemon (10.0.0.1:59518). Dec 13 00:27:08.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.109:22-10.0.0.1:59518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:08.925288 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 00:27:08.925362 kernel: audit: type=1130 audit(1765585628.920:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.109:22-10.0.0.1:59518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:09.001000 audit[5590]: USER_ACCT pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.002431 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 59518 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:09.005393 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:09.003000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.010537 systemd-logind[1630]: New session 16 of user core. Dec 13 00:27:09.014590 kernel: audit: type=1101 audit(1765585629.001:820): pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.014650 kernel: audit: type=1103 audit(1765585629.003:821): pid=5590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.014678 kernel: audit: type=1006 audit(1765585629.003:822): pid=5590 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 00:27:09.018515 kernel: audit: type=1300 audit(1765585629.003:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4177f6b0 a2=3 a3=0 items=0 ppid=1 pid=5590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:09.003000 audit[5590]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4177f6b0 a2=3 a3=0 items=0 ppid=1 pid=5590 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:09.025537 kernel: audit: type=1327 audit(1765585629.003:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:09.003000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:09.033477 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 00:27:09.036000 audit[5590]: USER_START pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.038000 audit[5594]: CRED_ACQ pid=5594 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.051654 kernel: audit: type=1105 audit(1765585629.036:823): pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.051738 kernel: audit: type=1103 audit(1765585629.038:824): pid=5594 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.179699 sshd[5594]: Connection closed by 10.0.0.1 port 59518 Dec 13 00:27:09.180434 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:09.181000 audit[5590]: USER_END pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.184882 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:59518.service: Deactivated successfully. Dec 13 00:27:09.187677 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 00:27:09.190278 kernel: audit: type=1106 audit(1765585629.181:825): pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.181000 audit[5590]: CRED_DISP pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:09.190577 systemd-logind[1630]: Session 16 logged out. Waiting for processes to exit. Dec 13 00:27:09.191843 systemd-logind[1630]: Removed session 16. Dec 13 00:27:09.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.109:22-10.0.0.1:59518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:09.196291 kernel: audit: type=1104 audit(1765585629.181:826): pid=5590 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:12.461204 kubelet[2837]: E1213 00:27:12.461148 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:14.204310 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:38358.service - OpenSSH per-connection server daemon (10.0.0.1:38358). Dec 13 00:27:14.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.109:22-10.0.0.1:38358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:14.205879 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:14.206014 kernel: audit: type=1130 audit(1765585634.204:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.109:22-10.0.0.1:38358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:14.295000 audit[5611]: USER_ACCT pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.296549 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 38358 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:14.299052 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:14.296000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.304855 systemd-logind[1630]: New session 17 of user core. 
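The dns.go entries that keep recurring here ("Nameserver limits exceeded") mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods: only the first three (1.1.1.1 1.0.0.1 8.8.8.8 in this log) are applied and the rest are omitted. A minimal sketch of that check, assuming the conventional three-entry limit and the default /etc/resolv.conf path rather than anything read from this node:

```python
# Sketch: reproduce kubelet's "Nameserver limits exceeded" warning by parsing a
# resolv.conf and splitting it into applied and omitted nameserver entries.
# The path and the three-entry limit are assumptions, not read from this host.
from pathlib import Path

MAX_NAMESERVERS = 3  # entries beyond this are the ones the kubelet omits

def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
    nameservers = []
    for line in Path(path).read_text().splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            nameservers.append(fields[1])

    applied = nameservers[:MAX_NAMESERVERS]
    omitted = nameservers[MAX_NAMESERVERS:]
    print("applied nameserver line:", " ".join(applied))
    if omitted:
        print("nameserver limits exceeded; omitted:", " ".join(omitted))

if __name__ == "__main__":
    check_resolv_conf()
```

Trimming the host's resolver list to three entries (or pointing the kubelet at a dedicated resolv.conf) is what normally makes this warning stop.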
Dec 13 00:27:14.308996 kernel: audit: type=1101 audit(1765585634.295:829): pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.309068 kernel: audit: type=1103 audit(1765585634.296:830): pid=5611 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.309091 kernel: audit: type=1006 audit(1765585634.296:831): pid=5611 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 00:27:14.296000 audit[5611]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc850a0fe0 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:14.319684 kernel: audit: type=1300 audit(1765585634.296:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc850a0fe0 a2=3 a3=0 items=0 ppid=1 pid=5611 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:14.319773 kernel: audit: type=1327 audit(1765585634.296:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:14.296000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:14.327565 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 00:27:14.329000 audit[5611]: USER_START pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.331000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.343369 kernel: audit: type=1105 audit(1765585634.329:832): pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.344314 kernel: audit: type=1103 audit(1765585634.331:833): pid=5615 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.418434 sshd[5615]: Connection closed by 10.0.0.1 port 38358 Dec 13 00:27:14.418821 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:14.419000 audit[5611]: USER_END pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.425000 audit[5611]: CRED_DISP pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.429370 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:38358.service: Deactivated successfully. Dec 13 00:27:14.431773 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 00:27:14.434125 kernel: audit: type=1106 audit(1765585634.419:834): pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.434320 kernel: audit: type=1104 audit(1765585634.425:835): pid=5611 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:14.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.109:22-10.0.0.1:38358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:14.434910 systemd-logind[1630]: Session 17 logged out. Waiting for processes to exit. Dec 13 00:27:14.436001 systemd-logind[1630]: Removed session 17. 
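Every image pull in this section fails the same way: containerd records "fetch failed after status: 404 Not Found" from ghcr.io, and the kubelet surfaces that as ErrImagePull because no v3.30.4 tag resolves for these ghcr.io/flatcar/calico images. A minimal sketch of checking such a tag directly against the registry, assuming ghcr.io follows the standard OCI distribution token flow; the endpoints and headers below are illustrative, not taken from the log:

```python
# Sketch: ask the registry whether a tag exists, mirroring the 404/NotFound that
# containerd reports above. Assumes ghcr.io's anonymous token flow matches the
# standard OCI/Docker scheme; adjust if the repository requires credentials.
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"

def tag_exists(repository: str, tag: str) -> bool:
    # 1) fetch an anonymous pull token for the repository
    token_url = f"https://{REGISTRY}/token?scope=repository:{repository}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # 2) HEAD the manifest; a missing tag comes back as HTTP 404 (NotFound)
    manifest_url = f"https://{REGISTRY}/v2/{repository}/manifests/{tag}"
    req = urllib.request.Request(
        manifest_url,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for name in ("whisker", "whisker-backend", "goldmane", "apiserver",
                 "kube-controllers", "csi", "node-driver-registrar"):
        repo = f"flatcar/calico/{name}"
        state = "found" if tag_exists(repo, "v3.30.4") else "not found"
        print(f"{REGISTRY}/{repo}:v3.30.4 -> {state}")
```

If the probe agrees with containerd, the problem is on the image side (the tag was never published, or the pods reference the wrong one), not on this node.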
Dec 13 00:27:14.461909 kubelet[2837]: E1213 00:27:14.461508 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:14.463014 kubelet[2837]: E1213 00:27:14.462954 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:27:15.461891 kubelet[2837]: E1213 00:27:15.461830 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:27:15.462320 containerd[1658]: time="2025-12-13T00:27:15.462276435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:16.060209 containerd[1658]: time="2025-12-13T00:27:16.060147037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:16.131223 containerd[1658]: time="2025-12-13T00:27:16.131144066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:16.131223 containerd[1658]: time="2025-12-13T00:27:16.131219880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:16.131490 kubelet[2837]: E1213 00:27:16.131446 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:16.131873 kubelet[2837]: E1213 00:27:16.131500 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:16.131873 kubelet[2837]: E1213 00:27:16.131597 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-8d68b8c7b-klw9n_calico-apiserver(f7f994f0-f034-4b20-81af-4664a13b71bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:16.131873 kubelet[2837]: E1213 00:27:16.131635 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:27:17.462155 containerd[1658]: time="2025-12-13T00:27:17.462113061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:27:17.793568 containerd[1658]: time="2025-12-13T00:27:17.793490404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:17.794967 containerd[1658]: time="2025-12-13T00:27:17.794892406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:27:17.794967 containerd[1658]: time="2025-12-13T00:27:17.794937553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:17.795245 kubelet[2837]: E1213 00:27:17.795190 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:27:17.795730 kubelet[2837]: E1213 00:27:17.795279 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:27:17.795730 kubelet[2837]: E1213 00:27:17.795375 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6559dbc8c-wbknq_calico-system(673f283b-d5b4-4c9a-b0fa-82c7c33a08c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:17.796575 containerd[1658]: time="2025-12-13T00:27:17.796392236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:27:18.151203 containerd[1658]: time="2025-12-13T00:27:18.151042887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:18.152473 containerd[1658]: time="2025-12-13T00:27:18.152308360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:27:18.152473 containerd[1658]: 
time="2025-12-13T00:27:18.152348777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:18.152650 kubelet[2837]: E1213 00:27:18.152596 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:27:18.152694 kubelet[2837]: E1213 00:27:18.152655 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:27:18.152781 kubelet[2837]: E1213 00:27:18.152747 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6559dbc8c-wbknq_calico-system(673f283b-d5b4-4c9a-b0fa-82c7c33a08c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:18.152839 kubelet[2837]: E1213 00:27:18.152800 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:27:18.463078 kubelet[2837]: E1213 00:27:18.462316 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:27:19.437088 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:38372.service - OpenSSH per-connection server daemon (10.0.0.1:38372). Dec 13 00:27:19.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.109:22-10.0.0.1:38372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:19.440742 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:19.441462 kernel: audit: type=1130 audit(1765585639.435:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.109:22-10.0.0.1:38372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:19.461284 kubelet[2837]: E1213 00:27:19.461213 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:19.493000 audit[5635]: USER_ACCT pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.494572 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 38372 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:19.496967 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:19.494000 audit[5635]: CRED_ACQ pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.501946 systemd-logind[1630]: New session 18 of user core. Dec 13 00:27:19.506473 kernel: audit: type=1101 audit(1765585639.493:838): pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.506545 kernel: audit: type=1103 audit(1765585639.494:839): pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.506589 kernel: audit: type=1006 audit(1765585639.494:840): pid=5635 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 00:27:19.509712 kernel: audit: type=1300 audit(1765585639.494:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdada3b4d0 a2=3 a3=0 items=0 ppid=1 pid=5635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:19.494000 audit[5635]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdada3b4d0 a2=3 a3=0 items=0 ppid=1 pid=5635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:19.494000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:19.517784 kernel: audit: type=1327 audit(1765585639.494:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:19.518516 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 13 00:27:19.520000 audit[5635]: USER_START pid=5635 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.522000 audit[5639]: CRED_ACQ pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.532981 kernel: audit: type=1105 audit(1765585639.520:841): pid=5635 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.533092 kernel: audit: type=1103 audit(1765585639.522:842): pid=5639 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.644948 sshd[5639]: Connection closed by 10.0.0.1 port 38372 Dec 13 00:27:19.645260 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:19.645000 audit[5635]: USER_END pid=5635 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.650256 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:38372.service: Deactivated successfully. Dec 13 00:27:19.652478 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 00:27:19.645000 audit[5635]: CRED_DISP pid=5635 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.653453 systemd-logind[1630]: Session 18 logged out. Waiting for processes to exit. Dec 13 00:27:19.655208 systemd-logind[1630]: Removed session 18. Dec 13 00:27:19.658000 kernel: audit: type=1106 audit(1765585639.645:843): pid=5635 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.658074 kernel: audit: type=1104 audit(1765585639.645:844): pid=5635 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:19.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.109:22-10.0.0.1:38372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:20.463326 kubelet[2837]: E1213 00:27:20.463194 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:27:22.463060 kubelet[2837]: E1213 00:27:22.462976 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:27:24.661085 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:51154.service - OpenSSH per-connection server daemon (10.0.0.1:51154). Dec 13 00:27:24.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.109:22-10.0.0.1:51154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:24.662783 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:24.662835 kernel: audit: type=1130 audit(1765585644.659:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.109:22-10.0.0.1:51154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:24.731000 audit[5656]: USER_ACCT pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.733211 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 51154 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:24.736517 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:24.731000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.743135 systemd-logind[1630]: New session 19 of user core. 
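Note how the pod_workers entries alternate between ErrImagePull (a pull was just attempted and failed) and ImagePullBackOff (the kubelet is waiting before retrying). The wait grows exponentially between attempts; the 10-second initial delay and 300-second cap in the sketch below are the commonly cited kubelet defaults, assumed here rather than read from this node's configuration:

```python
# Sketch: the delay schedule behind ImagePullBackOff, assuming a 10s initial
# back-off that doubles per failed attempt up to a 300s cap.
def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay *= 2

if __name__ == "__main__":
    for attempt, delay in backoff_schedule():
        print(f"attempt {attempt}: wait {delay:.0f}s before the next pull")
```

That is why the same errors keep reappearing at progressively wider intervals for as long as the v3.30.4 tags stay unresolvable.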
Dec 13 00:27:24.746400 kernel: audit: type=1101 audit(1765585644.731:847): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.746483 kernel: audit: type=1103 audit(1765585644.731:848): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.746515 kernel: audit: type=1006 audit(1765585644.731:849): pid=5656 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 13 00:27:24.731000 audit[5656]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd869fb360 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:24.765849 kernel: audit: type=1300 audit(1765585644.731:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd869fb360 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:24.765941 kernel: audit: type=1327 audit(1765585644.731:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:24.731000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:24.783585 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 00:27:24.785000 audit[5656]: USER_START pid=5656 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.787000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.808002 kernel: audit: type=1105 audit(1765585644.785:850): pid=5656 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.808052 kernel: audit: type=1103 audit(1765585644.787:851): pid=5660 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.885411 sshd[5660]: Connection closed by 10.0.0.1 port 51154 Dec 13 00:27:24.885716 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:24.885000 audit[5656]: USER_END pid=5656 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.890461 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:51154.service: Deactivated successfully. Dec 13 00:27:24.893523 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 00:27:24.894824 systemd-logind[1630]: Session 19 logged out. Waiting for processes to exit. Dec 13 00:27:24.899720 kernel: audit: type=1106 audit(1765585644.885:852): pid=5656 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.899791 kernel: audit: type=1104 audit(1765585644.886:853): pid=5656 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.886000 audit[5656]: CRED_DISP pid=5656 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:24.896290 systemd-logind[1630]: Removed session 19. Dec 13 00:27:24.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.109:22-10.0.0.1:51154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:26.463670 containerd[1658]: time="2025-12-13T00:27:26.463386263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:27:26.853820 containerd[1658]: time="2025-12-13T00:27:26.853753943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:26.926172 containerd[1658]: time="2025-12-13T00:27:26.926102127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:27:26.926384 containerd[1658]: time="2025-12-13T00:27:26.926148816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:26.926420 kubelet[2837]: E1213 00:27:26.926367 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:26.926745 kubelet[2837]: E1213 00:27:26.926433 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:27:26.926745 kubelet[2837]: E1213 00:27:26.926524 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:26.927615 containerd[1658]: time="2025-12-13T00:27:26.927569157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:27:27.309492 containerd[1658]: time="2025-12-13T00:27:27.309421405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:27.379628 containerd[1658]: time="2025-12-13T00:27:27.379539123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:27.379628 containerd[1658]: time="2025-12-13T00:27:27.379592243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:27:27.379951 kubelet[2837]: E1213 00:27:27.379885 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:27.380023 kubelet[2837]: E1213 00:27:27.379949 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:27:27.380099 kubelet[2837]: E1213 00:27:27.380069 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8pkv7_calico-system(7c357da7-f81d-4093-8d71-96d21eb95cdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:27.380167 kubelet[2837]: E1213 00:27:27.380121 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:27:29.461843 kubelet[2837]: E1213 00:27:29.461788 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:27:29.462798 containerd[1658]: time="2025-12-13T00:27:29.462743784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:27:29.835133 containerd[1658]: time="2025-12-13T00:27:29.835053861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:29.836701 containerd[1658]: time="2025-12-13T00:27:29.836620068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:27:29.836913 containerd[1658]: time="2025-12-13T00:27:29.836702754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:29.836971 kubelet[2837]: E1213 00:27:29.836900 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:29.837035 kubelet[2837]: E1213 00:27:29.836971 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:27:29.837221 
kubelet[2837]: E1213 00:27:29.837154 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgdw_calico-system(221701a5-b818-49d6-9c29-c4e060d651fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:29.837221 kubelet[2837]: E1213 00:27:29.837210 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:27:29.903897 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:51166.service - OpenSSH per-connection server daemon (10.0.0.1:51166). Dec 13 00:27:29.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.109:22-10.0.0.1:51166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:29.908282 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:29.908363 kernel: audit: type=1130 audit(1765585649.902:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.109:22-10.0.0.1:51166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:29.977000 audit[5673]: USER_ACCT pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:29.980585 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 51166 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:29.983910 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:29.981000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:29.991501 systemd-logind[1630]: New session 20 of user core. 
Dec 13 00:27:30.022423 kernel: audit: type=1101 audit(1765585649.977:856): pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.022571 kernel: audit: type=1103 audit(1765585649.981:857): pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.026721 kernel: audit: type=1006 audit(1765585649.981:858): pid=5673 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 00:27:30.026802 kernel: audit: type=1300 audit(1765585649.981:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedbb77750 a2=3 a3=0 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:29.981000 audit[5673]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedbb77750 a2=3 a3=0 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:30.032757 kernel: audit: type=1327 audit(1765585649.981:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:29.981000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:30.036515 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 00:27:30.039000 audit[5673]: USER_START pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.041000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.053268 kernel: audit: type=1105 audit(1765585650.039:859): pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.053356 kernel: audit: type=1103 audit(1765585650.041:860): pid=5677 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.116368 sshd[5677]: Connection closed by 10.0.0.1 port 51166 Dec 13 00:27:30.116583 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:30.117000 audit[5673]: USER_END pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.117000 audit[5673]: CRED_DISP pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.243722 kernel: audit: type=1106 audit(1765585650.117:861): pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.244045 kernel: audit: type=1104 audit(1765585650.117:862): pid=5673 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.250194 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:51166.service: Deactivated successfully. Dec 13 00:27:30.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.109:22-10.0.0.1:51166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:30.252378 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 00:27:30.253188 systemd-logind[1630]: Session 20 logged out. Waiting for processes to exit. Dec 13 00:27:30.257065 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:46396.service - OpenSSH per-connection server daemon (10.0.0.1:46396). 
Dec 13 00:27:30.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.109:22-10.0.0.1:46396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:30.257981 systemd-logind[1630]: Removed session 20. Dec 13 00:27:30.316000 audit[5690]: USER_ACCT pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.317899 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 46396 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:30.318000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.318000 audit[5690]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2c8b0000 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:30.318000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:30.321092 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:30.332223 systemd-logind[1630]: New session 21 of user core. Dec 13 00:27:30.351116 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 00:27:30.355000 audit[5690]: USER_START pid=5690 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.358000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:30.463349 containerd[1658]: time="2025-12-13T00:27:30.463201386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:30.464426 kubelet[2837]: E1213 00:27:30.464366 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:27:30.926687 containerd[1658]: 
time="2025-12-13T00:27:30.926603783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:31.114222 containerd[1658]: time="2025-12-13T00:27:31.114078412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:31.115300 containerd[1658]: time="2025-12-13T00:27:31.114300343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:31.115339 kubelet[2837]: E1213 00:27:31.114451 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:31.115339 kubelet[2837]: E1213 00:27:31.114492 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:31.115339 kubelet[2837]: E1213 00:27:31.114556 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8d68b8c7b-pl252_calico-apiserver(06395f50-a88f-48a6-b5f1-47617410b0b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:31.115339 kubelet[2837]: E1213 00:27:31.114585 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:27:31.754979 sshd[5695]: Connection closed by 10.0.0.1 port 46396 Dec 13 00:27:31.755302 sshd-session[5690]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:31.755000 audit[5690]: USER_END pid=5690 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:31.756000 audit[5690]: CRED_DISP pid=5690 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:31.766833 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:46396.service: Deactivated successfully. Dec 13 00:27:31.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.109:22-10.0.0.1:46396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:31.769038 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 00:27:31.769831 systemd-logind[1630]: Session 21 logged out. Waiting for processes to exit. Dec 13 00:27:31.771855 systemd-logind[1630]: Removed session 21. Dec 13 00:27:31.773100 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:46410.service - OpenSSH per-connection server daemon (10.0.0.1:46410). Dec 13 00:27:31.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.109:22-10.0.0.1:46410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:31.887000 audit[5707]: USER_ACCT pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:31.889252 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 46410 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:31.889000 audit[5707]: CRED_ACQ pid=5707 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:31.889000 audit[5707]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe940c5680 a2=3 a3=0 items=0 ppid=1 pid=5707 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:31.889000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:31.892303 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:31.898069 systemd-logind[1630]: New session 22 of user core. Dec 13 00:27:31.911538 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 00:27:31.914000 audit[5707]: USER_START pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:31.916000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:32.462257 containerd[1658]: time="2025-12-13T00:27:32.462170435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:27:32.879000 audit[5746]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:32.879000 audit[5746]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffce0f08fb0 a2=0 a3=7ffce0f08f9c items=0 ppid=2990 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:32.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:32.890000 audit[5746]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:32.890000 audit[5746]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce0f08fb0 a2=0 a3=0 items=0 ppid=2990 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:32.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:32.899524 sshd[5713]: Connection closed by 10.0.0.1 port 46410 Dec 13 00:27:32.901100 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:32.901000 audit[5707]: USER_END pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:32.902000 audit[5707]: CRED_DISP pid=5707 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:32.913087 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:46410.service: Deactivated successfully. Dec 13 00:27:32.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.109:22-10.0.0.1:46410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:32.915449 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 00:27:32.916395 systemd-logind[1630]: Session 22 logged out. Waiting for processes to exit. 
Dec 13 00:27:32.920113 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:46420.service - OpenSSH per-connection server daemon (10.0.0.1:46420). Dec 13 00:27:32.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.109:22-10.0.0.1:46420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:32.921319 systemd-logind[1630]: Removed session 22. Dec 13 00:27:32.925031 containerd[1658]: time="2025-12-13T00:27:32.924991814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:32.999997 containerd[1658]: time="2025-12-13T00:27:32.999917743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:27:33.000102 containerd[1658]: time="2025-12-13T00:27:32.999968500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:33.000280 kubelet[2837]: E1213 00:27:33.000224 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:33.000626 kubelet[2837]: E1213 00:27:33.000288 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:27:33.000626 kubelet[2837]: E1213 00:27:33.000371 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6dc9f4c9d-8t67s_calico-system(ebf778c3-930a-43a3-9210-8534e588628e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:33.000626 kubelet[2837]: E1213 00:27:33.000400 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:27:33.016000 audit[5751]: USER_ACCT pid=5751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.017969 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 46420 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:33.017000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.018000 audit[5751]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe870e5410 a2=3 a3=0 items=0 ppid=1 pid=5751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:33.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:33.020509 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:33.025620 systemd-logind[1630]: New session 23 of user core. Dec 13 00:27:33.035390 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 00:27:33.036000 audit[5751]: USER_START pid=5751 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.038000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.377945 sshd[5755]: Connection closed by 10.0.0.1 port 46420 Dec 13 00:27:33.378601 sshd-session[5751]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:33.381000 audit[5751]: USER_END pid=5751 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.381000 audit[5751]: CRED_DISP pid=5751 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.389204 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:46420.service: Deactivated successfully. Dec 13 00:27:33.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.109:22-10.0.0.1:46420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:33.392176 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 00:27:33.398699 systemd-logind[1630]: Session 23 logged out. Waiting for processes to exit. Dec 13 00:27:33.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.109:22-10.0.0.1:46428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:33.400157 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:46428.service - OpenSSH per-connection server daemon (10.0.0.1:46428). Dec 13 00:27:33.403437 systemd-logind[1630]: Removed session 23. 
Dec 13 00:27:33.458000 audit[5767]: USER_ACCT pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.459732 sshd[5767]: Accepted publickey for core from 10.0.0.1 port 46428 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:33.459000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.459000 audit[5767]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea67a3190 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:33.459000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:33.462223 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:33.463878 containerd[1658]: time="2025-12-13T00:27:33.463832727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:27:33.468268 systemd-logind[1630]: New session 24 of user core. Dec 13 00:27:33.475595 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 00:27:33.483000 audit[5767]: USER_START pid=5767 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.485000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.578205 sshd[5771]: Connection closed by 10.0.0.1 port 46428 Dec 13 00:27:33.578580 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:33.578000 audit[5767]: USER_END pid=5767 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.579000 audit[5767]: CRED_DISP pid=5767 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:33.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.109:22-10.0.0.1:46428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:33.583611 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:46428.service: Deactivated successfully. Dec 13 00:27:33.586014 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 00:27:33.587055 systemd-logind[1630]: Session 24 logged out. 
Waiting for processes to exit. Dec 13 00:27:33.588507 systemd-logind[1630]: Removed session 24. Dec 13 00:27:33.846782 containerd[1658]: time="2025-12-13T00:27:33.846719706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:27:33.859039 containerd[1658]: time="2025-12-13T00:27:33.858950785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:27:33.859039 containerd[1658]: time="2025-12-13T00:27:33.859006030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:27:33.859339 kubelet[2837]: E1213 00:27:33.859273 2837 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:33.859339 kubelet[2837]: E1213 00:27:33.859331 2837 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:27:33.859466 kubelet[2837]: E1213 00:27:33.859440 2837 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f7d758f46-d75hr_calico-apiserver(4deb6945-66eb-45de-ac81-4441491473f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:27:33.859510 kubelet[2837]: E1213 00:27:33.859481 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:27:33.911000 audit[5785]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:33.911000 audit[5785]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc19546300 a2=0 a3=7ffc195462ec items=0 ppid=2990 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:33.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:33.922000 audit[5785]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:33.922000 audit[5785]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc19546300 a2=0 a3=0 items=0 ppid=2990 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:33.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:38.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.109:22-10.0.0.1:46442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:38.591378 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442). Dec 13 00:27:38.602649 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 00:27:38.602717 kernel: audit: type=1130 audit(1765585658.590:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.109:22-10.0.0.1:46442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:38.671000 audit[5813]: USER_ACCT pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.673754 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:38.676484 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:38.687086 kernel: audit: type=1101 audit(1765585658.671:905): pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.687198 kernel: audit: type=1103 audit(1765585658.673:906): pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.673000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.686192 systemd-logind[1630]: New session 25 of user core. 
Dec 13 00:27:38.673000 audit[5813]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf56eefd0 a2=3 a3=0 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:38.698214 kernel: audit: type=1006 audit(1765585658.673:907): pid=5813 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 00:27:38.698335 kernel: audit: type=1300 audit(1765585658.673:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf56eefd0 a2=3 a3=0 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:38.698374 kernel: audit: type=1327 audit(1765585658.673:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:38.673000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:38.699458 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 00:27:38.700000 audit[5813]: USER_START pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.710972 kernel: audit: type=1105 audit(1765585658.700:908): pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.703000 audit[5817]: CRED_ACQ pid=5817 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.717339 kernel: audit: type=1103 audit(1765585658.703:909): pid=5817 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.796514 sshd[5817]: Connection closed by 10.0.0.1 port 46442 Dec 13 00:27:38.796857 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:38.797000 audit[5813]: USER_END pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.803428 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:46442.service: Deactivated successfully. 
Dec 13 00:27:38.810723 kernel: audit: type=1106 audit(1765585658.797:910): pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.810807 kernel: audit: type=1104 audit(1765585658.797:911): pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.797000 audit[5813]: CRED_DISP pid=5813 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:38.809986 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 00:27:38.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.109:22-10.0.0.1:46442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:38.812474 systemd-logind[1630]: Session 25 logged out. Waiting for processes to exit. Dec 13 00:27:38.814275 systemd-logind[1630]: Removed session 25. Dec 13 00:27:40.463615 kubelet[2837]: E1213 00:27:40.463550 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd" Dec 13 00:27:41.463048 kubelet[2837]: E1213 00:27:41.462993 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0" Dec 13 00:27:42.653000 audit[5833]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5833 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 
00:27:42.653000 audit[5833]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffedbd2afb0 a2=0 a3=7ffedbd2af9c items=0 ppid=2990 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:42.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:42.671000 audit[5833]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=5833 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:27:42.671000 audit[5833]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffedbd2afb0 a2=0 a3=7ffedbd2af9c items=0 ppid=2990 pid=5833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:42.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:27:43.461949 kubelet[2837]: E1213 00:27:43.461901 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:27:43.462795 kubelet[2837]: E1213 00:27:43.462519 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd" Dec 13 00:27:43.814447 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:51428.service - OpenSSH per-connection server daemon (10.0.0.1:51428). Dec 13 00:27:43.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.109:22-10.0.0.1:51428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:43.834676 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 00:27:43.834799 kernel: audit: type=1130 audit(1765585663.813:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.109:22-10.0.0.1:51428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:43.895000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.897222 sshd[5835]: Accepted publickey for core from 10.0.0.1 port 51428 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:43.900748 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:43.897000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.908158 kernel: audit: type=1101 audit(1765585663.895:916): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.908220 kernel: audit: type=1103 audit(1765585663.897:917): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.908263 kernel: audit: type=1006 audit(1765585663.897:918): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 00:27:43.907073 systemd-logind[1630]: New session 26 of user core. Dec 13 00:27:43.897000 audit[5835]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca155d840 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:43.916935 kernel: audit: type=1300 audit(1765585663.897:918): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca155d840 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:43.916998 kernel: audit: type=1327 audit(1765585663.897:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:43.897000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:43.917413 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 00:27:43.918000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.920000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.935583 kernel: audit: type=1105 audit(1765585663.918:919): pid=5835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:43.935643 kernel: audit: type=1103 audit(1765585663.920:920): pid=5839 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:44.012143 sshd[5839]: Connection closed by 10.0.0.1 port 51428 Dec 13 00:27:44.012732 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:44.013000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:44.018651 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:51428.service: Deactivated successfully. Dec 13 00:27:44.013000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:44.021039 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 00:27:44.022199 systemd-logind[1630]: Session 26 logged out. Waiting for processes to exit. Dec 13 00:27:44.023736 systemd-logind[1630]: Removed session 26. Dec 13 00:27:44.025410 kernel: audit: type=1106 audit(1765585664.013:921): pid=5835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:44.025517 kernel: audit: type=1104 audit(1765585664.013:922): pid=5835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:44.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.109:22-10.0.0.1:51428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:44.462519 kubelet[2837]: E1213 00:27:44.462463 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6dc9f4c9d-8t67s" podUID="ebf778c3-930a-43a3-9210-8534e588628e" Dec 13 00:27:44.463622 kubelet[2837]: E1213 00:27:44.462500 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-klw9n" podUID="f7f994f0-f034-4b20-81af-4664a13b71bc" Dec 13 00:27:45.462402 kubelet[2837]: E1213 00:27:45.462170 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8d68b8c7b-pl252" podUID="06395f50-a88f-48a6-b5f1-47617410b0b2" Dec 13 00:27:46.462701 kubelet[2837]: E1213 00:27:46.462090 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f7d758f46-d75hr" podUID="4deb6945-66eb-45de-ac81-4441491473f3" Dec 13 00:27:46.655638 update_engine[1636]: I20251213 00:27:46.655577 1636 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 00:27:46.655638 update_engine[1636]: I20251213 00:27:46.655637 1636 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 00:27:46.656562 update_engine[1636]: I20251213 00:27:46.656528 1636 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 00:27:46.657065 update_engine[1636]: I20251213 00:27:46.657032 1636 omaha_request_params.cc:62] Current group set to alpha Dec 13 00:27:46.657217 update_engine[1636]: I20251213 00:27:46.657183 1636 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 00:27:46.657217 update_engine[1636]: I20251213 00:27:46.657199 1636 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 00:27:46.657293 update_engine[1636]: I20251213 00:27:46.657222 1636 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 00:27:46.657364 update_engine[1636]: I20251213 00:27:46.657332 1636 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 00:27:46.657437 update_engine[1636]: I20251213 00:27:46.657415 1636 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 00:27:46.657515 update_engine[1636]: I20251213 00:27:46.657496 1636 omaha_request_action.cc:272] Request: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657515 update_engine[1636]: Dec 13 00:27:46.657760 update_engine[1636]: I20251213 00:27:46.657728 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 00:27:46.661892 update_engine[1636]: I20251213 00:27:46.661856 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 00:27:46.662521 update_engine[1636]: I20251213 00:27:46.662474 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 00:27:46.676631 update_engine[1636]: E20251213 00:27:46.676587 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 13 00:27:46.676705 update_engine[1636]: I20251213 00:27:46.676661 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 00:27:46.703147 locksmithd[1697]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 00:27:49.025049 systemd[1]: Started sshd@25-10.0.0.109:22-10.0.0.1:51440.service - OpenSSH per-connection server daemon (10.0.0.1:51440). Dec 13 00:27:49.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.109:22-10.0.0.1:51440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:27:49.026781 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:27:49.026905 kernel: audit: type=1130 audit(1765585669.024:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.109:22-10.0.0.1:51440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:27:49.110000 audit[5854]: USER_ACCT pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.112445 sshd[5854]: Accepted publickey for core from 10.0.0.1 port 51440 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:27:49.116385 sshd-session[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:27:49.117316 kernel: audit: type=1101 audit(1765585669.110:925): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.114000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.126840 kernel: audit: type=1103 audit(1765585669.114:926): pid=5854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.126987 kernel: audit: type=1006 audit(1765585669.114:927): pid=5854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 13 00:27:49.114000 audit[5854]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfe7bff20 a2=3 a3=0 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:49.128388 systemd-logind[1630]: New session 27 of user core. Dec 13 00:27:49.134483 kernel: audit: type=1300 audit(1765585669.114:927): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfe7bff20 a2=3 a3=0 items=0 ppid=1 pid=5854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:27:49.114000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:49.137202 kernel: audit: type=1327 audit(1765585669.114:927): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:27:49.138510 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 13 00:27:49.142000 audit[5854]: USER_START pid=5854 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.151587 kernel: audit: type=1105 audit(1765585669.142:928): pid=5854 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.151700 kernel: audit: type=1103 audit(1765585669.144:929): pid=5858 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.144000 audit[5858]: CRED_ACQ pid=5858 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.247087 sshd[5858]: Connection closed by 10.0.0.1 port 51440 Dec 13 00:27:49.247355 sshd-session[5854]: pam_unix(sshd:session): session closed for user core Dec 13 00:27:49.248000 audit[5854]: USER_END pid=5854 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.252216 systemd[1]: sshd@25-10.0.0.109:22-10.0.0.1:51440.service: Deactivated successfully. Dec 13 00:27:49.254285 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 00:27:49.255101 systemd-logind[1630]: Session 27 logged out. Waiting for processes to exit. Dec 13 00:27:49.260168 kernel: audit: type=1106 audit(1765585669.248:930): pid=5854 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.260226 kernel: audit: type=1104 audit(1765585669.248:931): pid=5854 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.248000 audit[5854]: CRED_DISP pid=5854 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:27:49.256409 systemd-logind[1630]: Removed session 27. Dec 13 00:27:49.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.109:22-10.0.0.1:51440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Dec 13 00:27:50.465867 kubelet[2837]: E1213 00:27:50.465826 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 00:27:53.464723 kubelet[2837]: E1213 00:27:53.464641 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6559dbc8c-wbknq" podUID="673f283b-d5b4-4c9a-b0fa-82c7c33a08c0"
Dec 13 00:27:54.266638 systemd[1]: Started sshd@26-10.0.0.109:22-10.0.0.1:47832.service - OpenSSH per-connection server daemon (10.0.0.1:47832).
Dec 13 00:27:54.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.109:22-10.0.0.1:47832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:27:54.268374 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 00:27:54.268505 kernel: audit: type=1130 audit(1765585674.266:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.109:22-10.0.0.1:47832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:27:54.329000 audit[5873]: USER_ACCT pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.330154 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 47832 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w
Dec 13 00:27:54.332729 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 00:27:54.331000 audit[5873]: CRED_ACQ pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.337914 systemd-logind[1630]: New session 28 of user core.
Dec 13 00:27:54.339461 kernel: audit: type=1101 audit(1765585674.329:934): pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.339507 kernel: audit: type=1103 audit(1765585674.331:935): pid=5873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.339550 kernel: audit: type=1006 audit(1765585674.331:936): pid=5873 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 13 00:27:54.331000 audit[5873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8cad1470 a2=3 a3=0 items=0 ppid=1 pid=5873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 00:27:54.348268 kernel: audit: type=1300 audit(1765585674.331:936): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8cad1470 a2=3 a3=0 items=0 ppid=1 pid=5873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 00:27:54.348329 kernel: audit: type=1327 audit(1765585674.331:936): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 00:27:54.331000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 13 00:27:54.360563 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 00:27:54.362000 audit[5873]: USER_START pid=5873 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.366000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.375086 kernel: audit: type=1105 audit(1765585674.362:937): pid=5873 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.375169 kernel: audit: type=1103 audit(1765585674.366:938): pid=5877 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.466673 kubelet[2837]: E1213 00:27:54.466585 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8pkv7" podUID="7c357da7-f81d-4093-8d71-96d21eb95cdd"
Dec 13 00:27:54.667000 audit[5873]: USER_END pid=5873 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.668502 sshd[5877]: Connection closed by 10.0.0.1 port 47832
Dec 13 00:27:54.666319 sshd-session[5873]: pam_unix(sshd:session): session closed for user core
Dec 13 00:27:54.672600 systemd[1]: sshd@26-10.0.0.109:22-10.0.0.1:47832.service: Deactivated successfully.
Dec 13 00:27:54.675262 kernel: audit: type=1106 audit(1765585674.667:939): pid=5873 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.675361 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 00:27:54.668000 audit[5873]: CRED_DISP pid=5873 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.679325 systemd-logind[1630]: Session 28 logged out. Waiting for processes to exit.
Dec 13 00:27:54.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.109:22-10.0.0.1:47832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 00:27:54.680760 kernel: audit: type=1104 audit(1765585674.668:940): pid=5873 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 00:27:54.680442 systemd-logind[1630]: Removed session 28.
Dec 13 00:27:55.461900 kubelet[2837]: E1213 00:27:55.461852 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 00:27:55.462198 kubelet[2837]: E1213 00:27:55.461931 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgdw" podUID="221701a5-b818-49d6-9c29-c4e060d651fd"