Jan 22 00:27:37.784831 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 21 22:02:49 -00 2026
Jan 22 00:27:37.785144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:27:37.785168 kernel: BIOS-provided physical RAM map:
Jan 22 00:27:37.785292 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 00:27:37.785302 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 00:27:37.785312 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 00:27:37.785322 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 22 00:27:37.785332 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 22 00:27:37.785430 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 22 00:27:37.785440 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 22 00:27:37.785455 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 00:27:37.785463 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 00:27:37.785472 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 22 00:27:37.785482 kernel: NX (Execute Disable) protection: active
Jan 22 00:27:37.785494 kernel: APIC: Static calls initialized
Jan 22 00:27:37.785509 kernel: SMBIOS 2.8 present.
Jan 22 00:27:37.785702 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 22 00:27:37.785711 kernel: DMI: Memory slots populated: 1/1
Jan 22 00:27:37.785721 kernel: Hypervisor detected: KVM
Jan 22 00:27:37.785730 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 22 00:27:37.785742 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 00:27:37.785754 kernel: kvm-clock: using sched offset of 10383480386 cycles
Jan 22 00:27:37.785765 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 00:27:37.785775 kernel: tsc: Detected 2445.426 MHz processor
Jan 22 00:27:37.785792 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 22 00:27:37.785803 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 22 00:27:37.785814 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 22 00:27:37.785825 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 00:27:37.785836 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 22 00:27:37.785847 kernel: Using GB pages for direct mapping
Jan 22 00:27:37.785858 kernel: ACPI: Early table checksum verification disabled
Jan 22 00:27:37.786074 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 22 00:27:37.786087 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786095 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786102 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786109 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 22 00:27:37.786117 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786124 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786136 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786144 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 22 00:27:37.786155 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 22 00:27:37.786162 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 22 00:27:37.786261 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 22 00:27:37.786275 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 22 00:27:37.786283 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 22 00:27:37.786290 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 22 00:27:37.786298 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 22 00:27:37.786305 kernel: No NUMA configuration found
Jan 22 00:27:37.786313 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 22 00:27:37.786321 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 22 00:27:37.786331 kernel: Zone ranges:
Jan 22 00:27:37.786339 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 00:27:37.786346 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 22 00:27:37.786354 kernel: Normal empty
Jan 22 00:27:37.786361 kernel: Device empty
Jan 22 00:27:37.786369 kernel: Movable zone start for each node
Jan 22 00:27:37.786376 kernel: Early memory node ranges
Jan 22 00:27:37.786386 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 00:27:37.786393 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 22 00:27:37.786401 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 22 00:27:37.786408 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 00:27:37.786416 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 00:27:37.786424 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 22 00:27:37.786431 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 00:27:37.786439 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 00:27:37.786449 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 00:27:37.786456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 00:27:37.797842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 00:27:37.798671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 00:27:37.798685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 00:27:37.798694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 00:27:37.798702 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 00:27:37.798765 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 22 00:27:37.798773 kernel: TSC deadline timer available
Jan 22 00:27:37.798781 kernel: CPU topo: Max. logical packages: 1
Jan 22 00:27:37.798788 kernel: CPU topo: Max. logical dies: 1
Jan 22 00:27:37.798796 kernel: CPU topo: Max. dies per package: 1
Jan 22 00:27:37.798803 kernel: CPU topo: Max. threads per core: 1
Jan 22 00:27:37.798811 kernel: CPU topo: Num. cores per package: 4
Jan 22 00:27:37.798822 kernel: CPU topo: Num. threads per package: 4
Jan 22 00:27:37.798830 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 22 00:27:37.798837 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 00:27:37.798845 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 22 00:27:37.798853 kernel: kvm-guest: setup PV sched yield
Jan 22 00:27:37.798861 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 22 00:27:37.799032 kernel: Booting paravirtualized kernel on KVM
Jan 22 00:27:37.799046 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 00:27:37.799054 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 22 00:27:37.799062 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 22 00:27:37.799070 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 22 00:27:37.799077 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 22 00:27:37.799085 kernel: kvm-guest: PV spinlocks enabled
Jan 22 00:27:37.799093 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 22 00:27:37.799105 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:27:37.799113 kernel: random: crng init done
Jan 22 00:27:37.799121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 00:27:37.799129 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 22 00:27:37.799136 kernel: Fallback order for Node 0: 0
Jan 22 00:27:37.799144 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 22 00:27:37.799152 kernel: Policy zone: DMA32
Jan 22 00:27:37.799163 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 00:27:37.799271 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 22 00:27:37.799286 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 22 00:27:37.799300 kernel: ftrace: allocated 157 pages with 5 groups
Jan 22 00:27:37.799314 kernel: Dynamic Preempt: voluntary
Jan 22 00:27:37.799327 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 00:27:37.799340 kernel: rcu: RCU event tracing is enabled.
Jan 22 00:27:37.799357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 22 00:27:37.802412 kernel: Trampoline variant of Tasks RCU enabled.
Jan 22 00:27:37.802471 kernel: Rude variant of Tasks RCU enabled.
Jan 22 00:27:37.802483 kernel: Tracing variant of Tasks RCU enabled.
Jan 22 00:27:37.802496 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 00:27:37.802679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 22 00:27:37.802699 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 22 00:27:37.802761 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 22 00:27:37.802773 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 22 00:27:37.802784 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 22 00:27:37.802795 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 00:27:37.802817 kernel: Console: colour VGA+ 80x25
Jan 22 00:27:37.802832 kernel: printk: legacy console [ttyS0] enabled
Jan 22 00:27:37.802844 kernel: ACPI: Core revision 20240827
Jan 22 00:27:37.802855 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 22 00:27:37.803078 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 00:27:37.803095 kernel: x2apic enabled
Jan 22 00:27:37.803113 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 00:27:37.803319 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 22 00:27:37.803337 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 22 00:27:37.803356 kernel: kvm-guest: setup PV IPIs
Jan 22 00:27:37.803367 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 22 00:27:37.803378 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 22 00:27:37.803391 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 22 00:27:37.803405 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 00:27:37.803417 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 00:27:37.803428 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 00:27:37.803445 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 00:27:37.803459 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 00:27:37.803471 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 00:27:37.803482 kernel: Speculative Store Bypass: Vulnerable
Jan 22 00:27:37.803496 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 00:27:37.803509 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 00:27:37.803690 kernel: active return thunk: srso_alias_return_thunk
Jan 22 00:27:37.803707 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 00:27:37.803719 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 22 00:27:37.803731 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 22 00:27:37.803743 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 00:27:37.803755 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 00:27:37.803766 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 00:27:37.803778 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 22 00:27:37.803793 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 22 00:27:37.803805 kernel: Freeing SMP alternatives memory: 32K
Jan 22 00:27:37.803816 kernel: pid_max: default: 32768 minimum: 301
Jan 22 00:27:37.803828 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 22 00:27:37.803840 kernel: landlock: Up and running.
Jan 22 00:27:37.803851 kernel: SELinux: Initializing.
Jan 22 00:27:37.803865 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 22 00:27:37.804050 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 22 00:27:37.804063 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 22 00:27:37.804074 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 22 00:27:37.804086 kernel: signal: max sigframe size: 1776
Jan 22 00:27:37.804098 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 00:27:37.804110 kernel: rcu: Max phase no-delay instances is 400.
Jan 22 00:27:37.804124 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 22 00:27:37.804140 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 22 00:27:37.804151 kernel: smp: Bringing up secondary CPUs ...
Jan 22 00:27:37.804162 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 00:27:37.804273 kernel: .... node #0, CPUs: #1 #2 #3
Jan 22 00:27:37.804287 kernel: smp: Brought up 1 node, 4 CPUs
Jan 22 00:27:37.804301 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 22 00:27:37.804314 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 118472K reserved, 0K cma-reserved)
Jan 22 00:27:37.804331 kernel: devtmpfs: initialized
Jan 22 00:27:37.804344 kernel: x86/mm: Memory block size: 128MB
Jan 22 00:27:37.804356 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 00:27:37.804370 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 22 00:27:37.804384 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 00:27:37.804395 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 00:27:37.804406 kernel: audit: initializing netlink subsys (disabled)
Jan 22 00:27:37.804425 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 00:27:37.804438 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 00:27:37.804449 kernel: audit: type=2000 audit(1769041635.641:1): state=initialized audit_enabled=0 res=1
Jan 22 00:27:37.804460 kernel: cpuidle: using governor menu
Jan 22 00:27:37.804471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 00:27:37.804482 kernel: dca service started, version 1.12.1
Jan 22 00:27:37.804494 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 22 00:27:37.806988 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 22 00:27:37.807703 kernel: PCI: Using configuration type 1 for base access
Jan 22 00:27:37.807724 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 00:27:37.807737 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 00:27:37.807750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 00:27:37.807763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 00:27:37.807776 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 00:27:37.807854 kernel: ACPI: Added _OSI(Module Device)
Jan 22 00:27:37.808088 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 00:27:37.808106 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 00:27:37.808117 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 00:27:37.808129 kernel: ACPI: Interpreter enabled
Jan 22 00:27:37.808143 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 22 00:27:37.808155 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 00:27:37.808166 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 00:27:37.808283 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 00:27:37.808296 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 22 00:27:37.808308 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 00:27:37.810318 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 22 00:27:37.810817 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 22 00:27:37.812099 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 22 00:27:37.812129 kernel: PCI host bridge to bus 0000:00
Jan 22 00:27:37.812990 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 22 00:27:37.813745 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 22 00:27:37.814020 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 00:27:37.814403 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 22 00:27:37.814878 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 00:27:37.826405 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 22 00:27:37.831481 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 00:27:37.833395 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 22 00:27:37.839071 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 22 00:27:37.854029 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 22 00:27:37.854913 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 22 00:27:37.855143 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 22 00:27:37.855718 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 00:27:37.856497 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 00:27:37.857363 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 22 00:27:37.857838 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 22 00:27:37.858125 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 22 00:27:37.858844 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 22 00:27:37.859764 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 22 00:27:37.860052 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 22 00:27:37.865358 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 22 00:27:37.870096 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 00:27:37.897707 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 22 00:27:37.899875 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 22 00:27:37.900298 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 22 00:27:37.900870 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 22 00:27:37.901279 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 22 00:27:37.901846 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 22 00:27:37.902131 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 16601 usecs
Jan 22 00:27:37.902796 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 22 00:27:37.903890 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 22 00:27:37.904282 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 22 00:27:37.908852 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 22 00:27:37.909154 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 22 00:27:37.909280 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 00:27:37.909295 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 00:27:37.909306 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 00:27:37.909321 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 00:27:37.909333 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 22 00:27:37.909415 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 22 00:27:37.909427 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 22 00:27:37.909438 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 22 00:27:37.909449 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 22 00:27:37.909459 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 22 00:27:37.909471 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 22 00:27:37.909484 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 22 00:27:37.909504 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 22 00:27:37.909683 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 22 00:27:37.909698 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 22 00:27:37.909710 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 22 00:27:37.909722 kernel: iommu: Default domain type: Translated
Jan 22 00:27:37.909733 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 00:27:37.909744 kernel: PCI: Using ACPI for IRQ routing
Jan 22 00:27:37.909761 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 22 00:27:37.909772 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 22 00:27:37.909786 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 22 00:27:37.910080 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 22 00:27:37.910475 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 22 00:27:37.910968 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 00:27:37.910988 kernel: vgaarb: loaded
Jan 22 00:27:37.911007 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 22 00:27:37.911018 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 22 00:27:37.911029 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 00:27:37.911041 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 00:27:37.911056 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 00:27:37.911069 kernel: pnp: PnP ACPI init
Jan 22 00:27:37.911862 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 22 00:27:37.911892 kernel: pnp: PnP ACPI: found 6 devices
Jan 22 00:27:37.911904 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 00:27:37.911915 kernel: NET: Registered PF_INET protocol family
Jan 22 00:27:37.911926 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 00:27:37.911939 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 22 00:27:37.911952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 00:27:37.911973 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 22 00:27:37.911986 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 22 00:27:37.911998 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 22 00:27:37.912009 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 22 00:27:37.912019 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 22 00:27:37.912030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 00:27:37.912041 kernel: NET: Registered PF_XDP protocol family
Jan 22 00:27:37.912423 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 22 00:27:37.912873 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 22 00:27:37.913141 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 00:27:37.913693 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 22 00:27:37.913958 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 22 00:27:37.914322 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 22 00:27:37.914341 kernel: PCI: CLS 0 bytes, default 64
Jan 22 00:27:37.914360 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 22 00:27:37.914371 kernel: Initialise system trusted keyrings
Jan 22 00:27:37.914382 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 22 00:27:37.914396 kernel: Key type asymmetric registered
Jan 22 00:27:37.914409 kernel: Asymmetric key parser 'x509' registered
Jan 22 00:27:37.914420 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 22 00:27:37.914430 kernel: io scheduler mq-deadline registered
Jan 22 00:27:37.914446 kernel: io scheduler kyber registered
Jan 22 00:27:37.914457 kernel: io scheduler bfq registered
Jan 22 00:27:37.914467 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 22 00:27:37.914480 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 22 00:27:37.914495 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 22 00:27:37.914507 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 22 00:27:37.914700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 00:27:37.914720 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 00:27:37.914734 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 00:27:37.914745 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 00:27:37.914756 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 00:27:37.915033 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 22 00:27:37.915049 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 22 00:27:37.915416 kernel: rtc_cmos 00:04: registered as rtc0
Jan 22 00:27:37.915885 kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T00:27:26 UTC (1769041646)
Jan 22 00:27:37.916159 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 22 00:27:37.916285 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 00:27:37.916298 kernel: NET: Registered PF_INET6 protocol family
Jan 22 00:27:37.916307 kernel: Segment Routing with IPv6
Jan 22 00:27:37.916315 kernel: In-situ OAM (IOAM) with IPv6
Jan 22 00:27:37.916323 kernel: NET: Registered PF_PACKET protocol family
Jan 22 00:27:37.916336 kernel: Key type dns_resolver registered
Jan 22 00:27:37.916344 kernel: IPI shorthand broadcast: enabled
Jan 22 00:27:37.916352 kernel: sched_clock: Marking stable (10335076119, 662673718)->(12559272273, -1561522436)
Jan 22 00:27:37.916360 kernel: registered taskstats version 1
Jan 22 00:27:37.916368 kernel: Loading compiled-in X.509 certificates
Jan 22 00:27:37.916376 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3c3e07c08e874e2a4bf964a0051bfd3618f8b847'
Jan 22 00:27:37.916384 kernel: Demotion targets for Node 0: null
Jan 22 00:27:37.916395 kernel: Key type .fscrypt registered
Jan 22 00:27:37.916403 kernel: Key type fscrypt-provisioning registered
Jan 22 00:27:37.916410 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 00:27:37.916418 kernel: ima: Allocated hash algorithm: sha1
Jan 22 00:27:37.916426 kernel: ima: No architecture policies found
Jan 22 00:27:37.916434 kernel: clk: Disabling unused clocks
Jan 22 00:27:37.916442 kernel: Freeing unused kernel image (initmem) memory: 15436K
Jan 22 00:27:37.916452 kernel: Write protecting the kernel read-only data: 45056k
Jan 22 00:27:37.916460 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K
Jan 22 00:27:37.916468 kernel: Run /init as init process
Jan 22 00:27:37.916475 kernel: with arguments:
Jan 22 00:27:37.916483 kernel: /init
Jan 22 00:27:37.916491 kernel: with environment:
Jan 22 00:27:37.916499 kernel: HOME=/
Jan 22 00:27:37.916697 kernel: TERM=linux
Jan 22 00:27:37.916717 kernel: SCSI subsystem initialized
Jan 22 00:27:37.916729 kernel: libata version 3.00 loaded.
Jan 22 00:27:37.916958 kernel: ahci 0000:00:1f.2: version 3.0
Jan 22 00:27:37.916971 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 22 00:27:37.917344 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 22 00:27:37.917832 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 22 00:27:37.918119 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 22 00:27:37.919079 kernel: scsi host0: ahci
Jan 22 00:27:37.919714 kernel: scsi host1: ahci
Jan 22 00:27:37.920093 kernel: scsi host2: ahci
Jan 22 00:27:37.920872 kernel: scsi host3: ahci
Jan 22 00:27:37.921765 kernel: scsi host4: ahci
Jan 22 00:27:37.922162 kernel: scsi host5: ahci
Jan 22 00:27:37.922289 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 22 00:27:37.922302 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 22 00:27:37.922316 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 22 00:27:37.922329 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 22 00:27:37.922348 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 22 00:27:37.922361 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 22 00:27:37.922374 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 22 00:27:37.922386 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 22 00:27:37.922399 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 22 00:27:37.922411 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 22 00:27:37.922423 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 22 00:27:37.922439 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 22 00:27:37.922451 kernel: ata3.00: LPM support broken, forcing max_power
Jan 22 00:27:37.922464 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 00:27:37.922477 kernel: ata3.00: applying bridge limits
Jan 22 00:27:37.922490 kernel: ata3.00: LPM support broken, forcing max_power
Jan 22 00:27:37.922501 kernel: ata3.00: configured for UDMA/100
Jan 22 00:27:37.923090 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 22 00:27:37.929504 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 22 00:27:37.930048 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 00:27:37.930076 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 00:27:37.930434 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 22 00:27:37.930451 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 22 00:27:37.934389 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 22 00:27:37.934707 kernel: GPT:16515071 != 27000831
Jan 22 00:27:37.934721 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 22 00:27:37.934733 kernel: GPT:16515071 != 27000831
Jan 22 00:27:37.934744 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 22 00:27:37.934755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 22 00:27:37.934767 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 00:27:37.934780 kernel: device-mapper: uevent: version 1.0.3
Jan 22 00:27:37.934801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 22 00:27:37.934813 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 22 00:27:37.934824 kernel: raid6: avx2x4 gen() 15918 MB/s
Jan 22 00:27:37.934835 kernel: raid6: avx2x2 gen() 18246 MB/s
Jan 22 00:27:37.934847 kernel: raid6: avx2x1 gen() 12459 MB/s
Jan 22 00:27:37.934858 kernel: raid6: using algorithm avx2x2 gen() 18246 MB/s
Jan 22 00:27:37.934871 kernel: raid6: .... xor() 16398 MB/s, rmw enabled
Jan 22 00:27:37.934890 kernel: raid6: using avx2x2 recovery algorithm
Jan 22 00:27:37.934898 kernel: xor: automatically using best checksumming function avx
Jan 22 00:27:37.934907 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 22 00:27:37.934919 kernel: BTRFS: device fsid 79986906-7858-40a3-90f5-bda7e594a44c devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 22 00:27:37.934930 kernel: BTRFS info (device dm-0): first mount of filesystem 79986906-7858-40a3-90f5-bda7e594a44c
Jan 22 00:27:37.934939 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 22 00:27:37.934947 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 22 00:27:37.934955 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 22 00:27:37.934963 kernel: loop: module loaded
Jan 22 00:27:37.934971 kernel: loop0: detected capacity change from 0 to 100160
Jan 22 00:27:37.934980 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 22 00:27:37.934993 systemd[1]: Successfully made /usr/ read-only.
Jan 22 00:27:37.935093 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 22 00:27:37.935103 systemd[1]: Detected virtualization kvm.
Jan 22 00:27:37.935111 systemd[1]: Detected architecture x86-64.
Jan 22 00:27:37.935120 systemd[1]: Running in initrd.
Jan 22 00:27:37.935129 systemd[1]: No hostname configured, using default hostname.
Jan 22 00:27:37.935141 systemd[1]: Hostname set to .
Jan 22 00:27:37.935150 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 22 00:27:37.935159 systemd[1]: Queued start job for default target initrd.target.
Jan 22 00:27:37.935168 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 22 00:27:37.935265 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:27:37.935274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:27:37.935288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 22 00:27:37.935297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 22 00:27:37.935306 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 22 00:27:37.935321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 22 00:27:37.935330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:27:37.935340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:27:37.935352 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 22 00:27:37.935361 systemd[1]: Reached target paths.target - Path Units. Jan 22 00:27:37.935370 systemd[1]: Reached target slices.target - Slice Units. Jan 22 00:27:37.935378 systemd[1]: Reached target swap.target - Swaps. Jan 22 00:27:37.935387 systemd[1]: Reached target timers.target - Timer Units. Jan 22 00:27:37.935396 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 22 00:27:37.935408 kernel: hrtimer: interrupt took 14954402 ns Jan 22 00:27:37.935428 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 22 00:27:37.935441 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:27:37.935453 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 22 00:27:37.935465 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 22 00:27:37.935477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:27:37.935489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 22 00:27:37.935501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:27:37.935695 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 00:27:37.935706 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 22 00:27:37.935715 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 22 00:27:37.935724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 22 00:27:37.935732 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 22 00:27:37.935742 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 22 00:27:37.935755 systemd[1]: Starting systemd-fsck-usr.service... Jan 22 00:27:37.935764 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 22 00:27:37.935773 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 22 00:27:37.935786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:27:37.935807 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 22 00:27:37.935821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:27:37.935833 systemd[1]: Finished systemd-fsck-usr.service. Jan 22 00:27:37.936123 systemd-journald[321]: Collecting audit messages is enabled. 
Jan 22 00:27:37.936156 kernel: audit: type=1130 audit(1769041657.694:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:37.936167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 22 00:27:37.936283 systemd-journald[321]: Journal started Jan 22 00:27:37.936302 systemd-journald[321]: Runtime Journal (/run/log/journal/c9edd8c2858c4760bc873f24db2083c5) is 6M, max 48.2M, 42.2M free. Jan 22 00:27:37.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:37.998851 systemd[1]: Started systemd-journald.service - Journal Service. Jan 22 00:27:37.999435 kernel: audit: type=1130 audit(1769041657.945:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:37.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:38.019135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 22 00:27:38.301028 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 22 00:27:38.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:38.303941 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 22 00:27:38.410711 kernel: audit: type=1130 audit(1769041658.321:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:38.374770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 22 00:27:39.539140 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 22 00:27:39.563826 kernel: Bridge firewalling registered Jan 22 00:27:38.441429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:27:39.681846 kernel: audit: type=1130 audit(1769041659.565:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.681896 kernel: audit: type=1130 audit(1769041659.633:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:38.497014 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 22 00:27:39.567449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 22 00:27:39.719476 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 22 00:27:39.784700 kernel: audit: type=1130 audit(1769041659.728:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.792849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 22 00:27:39.827122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 22 00:27:39.899893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:27:39.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.959886 kernel: audit: type=1130 audit(1769041659.924:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.965477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:27:39.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:39.995812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 22 00:27:40.036440 kernel: audit: type=1130 audit(1769041659.981:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:40.036480 kernel: audit: type=1334 audit(1769041659.989:10): prog-id=6 op=LOAD Jan 22 00:27:39.989000 audit: BPF prog-id=6 op=LOAD Jan 22 00:27:40.053945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 22 00:27:40.116339 kernel: audit: type=1130 audit(1769041660.062:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:40.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:40.117417 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 22 00:27:40.260865 dracut-cmdline[362]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2 Jan 22 00:27:40.298326 systemd-resolved[357]: Positive Trust Anchors: Jan 22 00:27:40.298340 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 22 00:27:40.298347 systemd-resolved[357]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 22 00:27:40.298394 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 22 00:27:40.427462 systemd-resolved[357]: Defaulting to hostname 'linux'. Jan 22 00:27:40.434150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 22 00:27:40.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:40.476367 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:27:41.331076 kernel: Loading iSCSI transport class v2.0-870. Jan 22 00:27:41.414184 kernel: iscsi: registered transport (tcp) Jan 22 00:27:41.505964 kernel: iscsi: registered transport (qla4xxx) Jan 22 00:27:41.506119 kernel: QLogic iSCSI HBA Driver Jan 22 00:27:41.738070 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 22 00:27:41.925135 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:27:41.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:41.943717 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 22 00:27:42.359198 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 22 00:27:42.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:42.383948 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 22 00:27:42.413137 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 22 00:27:42.639451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 22 00:27:42.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:42.654000 audit: BPF prog-id=7 op=LOAD Jan 22 00:27:42.654000 audit: BPF prog-id=8 op=LOAD Jan 22 00:27:42.656745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:27:42.786739 systemd-udevd[594]: Using default interface naming scheme 'v257'. Jan 22 00:27:42.851688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:27:42.935026 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 22 00:27:42.935333 kernel: audit: type=1130 audit(1769041662.875:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:42.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:42.884882 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 22 00:27:43.022789 dracut-pre-trigger[637]: rd.md=0: removing MD RAID activation Jan 22 00:27:43.325944 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 22 00:27:43.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:43.373960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 00:27:43.416894 kernel: audit: type=1130 audit(1769041663.360:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:43.416938 kernel: audit: type=1334 audit(1769041663.369:20): prog-id=9 op=LOAD Jan 22 00:27:43.369000 audit: BPF prog-id=9 op=LOAD Jan 22 00:27:43.434162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 22 00:27:43.498347 kernel: audit: type=1130 audit(1769041663.440:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:43.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:43.514771 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 22 00:27:43.854487 systemd-networkd[730]: lo: Link UP Jan 22 00:27:43.854499 systemd-networkd[730]: lo: Gained carrier Jan 22 00:27:43.919144 kernel: audit: type=1130 audit(1769041663.866:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:43.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:43.860844 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 00:27:43.873720 systemd[1]: Reached target network.target - Network. Jan 22 00:27:44.083181 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:27:44.157866 kernel: audit: type=1130 audit(1769041664.095:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:44.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:44.158191 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 22 00:27:44.327402 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 22 00:27:44.478089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 22 00:27:44.499979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 22 00:27:44.553707 kernel: cryptd: max_cpu_qlen set to 1000 Jan 22 00:27:44.607175 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 22 00:27:44.645998 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 22 00:27:44.671969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 22 00:27:44.672072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 22 00:27:44.753936 kernel: audit: type=1131 audit(1769041664.710:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:44.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:44.711069 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:27:44.812880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:27:44.843747 disk-uuid[776]: Primary Header is updated. Jan 22 00:27:44.843747 disk-uuid[776]: Secondary Entries is updated. Jan 22 00:27:44.843747 disk-uuid[776]: Secondary Header is updated. Jan 22 00:27:44.921974 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 22 00:27:44.922043 kernel: AES CTR mode by8 optimization enabled Jan 22 00:27:45.225410 systemd-networkd[730]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:27:46.174414 disk-uuid[777]: Warning: The kernel is still using the old partition table. Jan 22 00:27:46.174414 disk-uuid[777]: The new table will be used at the next reboot or after you Jan 22 00:27:46.174414 disk-uuid[777]: run partprobe(8) or kpartx(8) Jan 22 00:27:46.174414 disk-uuid[777]: The operation has completed successfully. Jan 22 00:27:46.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:45.225891 systemd-networkd[730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 22 00:27:46.295774 kernel: audit: type=1130 audit(1769041666.172:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:45.235345 systemd-networkd[730]: eth0: Link UP Jan 22 00:27:45.236350 systemd-networkd[730]: eth0: Gained carrier Jan 22 00:27:45.236370 systemd-networkd[730]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:27:45.274090 systemd-networkd[730]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 22 00:27:45.509056 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 22 00:27:46.177766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 22 00:27:46.298354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:27:46.374949 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 22 00:27:46.532928 kernel: audit: type=1130 audit(1769041666.436:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:46.532977 kernel: audit: type=1131 audit(1769041666.436:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:46.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:46.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:46.396449 systemd-networkd[730]: eth0: Gained IPv6LL Jan 22 00:27:46.406134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 22 00:27:46.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:46.420427 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 22 00:27:46.420833 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 22 00:27:46.535852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:27:46.588489 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 22 00:27:46.729033 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 22 00:27:46.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:46.778477 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866) Jan 22 00:27:46.805205 kernel: BTRFS info (device vda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:27:46.805365 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:27:46.870056 kernel: BTRFS info (device vda6): turning on async discard Jan 22 00:27:46.870163 kernel: BTRFS info (device vda6): enabling free space tree Jan 22 00:27:46.919824 kernel: BTRFS info (device vda6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:27:46.966836 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 22 00:27:46.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:47.001779 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 22 00:27:48.223116 ignition[887]: Ignition 2.22.0 Jan 22 00:27:48.224861 ignition[887]: Stage: fetch-offline Jan 22 00:27:48.228460 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:27:48.228491 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 00:27:48.228846 ignition[887]: parsed url from cmdline: "" Jan 22 00:27:48.228853 ignition[887]: no config URL provided Jan 22 00:27:48.228862 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 00:27:48.228885 ignition[887]: no config at "/usr/lib/ignition/user.ign" Jan 22 00:27:48.229040 ignition[887]: op(1): [started] loading QEMU firmware config module Jan 22 00:27:48.229050 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 22 00:27:48.522038 ignition[887]: op(1): [finished] loading QEMU firmware config module Jan 22 00:27:50.322688 ignition[887]: parsing config with SHA512: f4145d671cd5c069740798a4d863c334d76586c9632214d94b80b699c904ad998e316ee399e4dbd938024d81537ea6fbe2690e3e253013d529a4eadcd851bd79 Jan 22 00:27:50.471499 unknown[887]: fetched base config from "system" Jan 22 00:27:50.471874 unknown[887]: fetched user config from "qemu" Jan 22 00:27:50.482030 ignition[887]: fetch-offline: fetch-offline passed Jan 22 00:27:50.607099 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 22 00:27:50.607171 kernel: audit: type=1130 audit(1769041670.528:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:50.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:50.500215 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 00:27:50.484133 ignition[887]: Ignition finished successfully Jan 22 00:27:50.537828 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 22 00:27:50.572745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 22 00:27:51.213855 ignition[897]: Ignition 2.22.0 Jan 22 00:27:51.213972 ignition[897]: Stage: kargs Jan 22 00:27:51.222173 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:27:51.222193 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 00:27:51.227266 ignition[897]: kargs: kargs passed Jan 22 00:27:51.266225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 22 00:27:51.367712 kernel: audit: type=1130 audit(1769041671.304:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:51.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:51.228836 ignition[897]: Ignition finished successfully Jan 22 00:27:51.337907 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 22 00:27:51.678955 ignition[905]: Ignition 2.22.0 Jan 22 00:27:51.679051 ignition[905]: Stage: disks Jan 22 00:27:51.679278 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:27:51.679295 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 00:27:51.714719 ignition[905]: disks: disks passed Jan 22 00:27:51.714825 ignition[905]: Ignition finished successfully Jan 22 00:27:51.856901 kernel: audit: type=1130 audit(1769041671.771:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:51.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:51.749875 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 22 00:27:51.780730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 22 00:27:51.918890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 22 00:27:51.927055 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:27:52.005954 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:27:52.019943 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:27:52.029064 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 22 00:27:52.524414 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 22 00:27:52.540002 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 22 00:27:52.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:52.637946 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 22 00:27:52.671912 kernel: audit: type=1130 audit(1769041672.597:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:54.262165 kernel: EXT4-fs (vda9): mounted filesystem 2fa3c08b-a48e-45e5-aeb3-7441bca9cf30 r/w with ordered data mode. Quota mode: none. Jan 22 00:27:54.265226 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 22 00:27:54.284990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 22 00:27:54.325766 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 00:27:54.398795 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 22 00:27:54.406998 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 22 00:27:54.500260 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924) Jan 22 00:27:54.407424 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 22 00:27:54.562072 kernel: BTRFS info (device vda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:27:54.562118 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:27:54.407468 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:27:54.638041 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 22 00:27:54.648965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 22 00:27:54.724267 kernel: BTRFS info (device vda6): turning on async discard Jan 22 00:27:54.724439 kernel: BTRFS info (device vda6): enabling free space tree Jan 22 00:27:54.745978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:27:55.128696 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jan 22 00:27:55.191190 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Jan 22 00:27:55.247025 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Jan 22 00:27:55.297160 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Jan 22 00:27:56.343803 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 22 00:27:56.423697 kernel: audit: type=1130 audit(1769041676.365:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:56.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:56.371798 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 22 00:27:56.455824 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 22 00:27:56.504920 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 22 00:27:56.540838 kernel: BTRFS info (device vda6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:27:56.751929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 22 00:27:56.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:27:56.836998 kernel: audit: type=1130 audit(1769041676.797:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:57.259465 ignition[1038]: INFO : Ignition 2.22.0 Jan 22 00:27:57.259465 ignition[1038]: INFO : Stage: mount Jan 22 00:27:57.259465 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:27:57.259465 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 00:27:57.334816 ignition[1038]: INFO : mount: mount passed Jan 22 00:27:57.334816 ignition[1038]: INFO : Ignition finished successfully Jan 22 00:27:57.371728 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 22 00:27:57.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:57.455509 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 22 00:27:57.515456 kernel: audit: type=1130 audit(1769041677.434:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:27:57.636924 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 22 00:27:57.870315 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050) Jan 22 00:27:57.903754 kernel: BTRFS info (device vda6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:27:57.904264 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:27:58.014104 kernel: BTRFS info (device vda6): turning on async discard Jan 22 00:27:58.014253 kernel: BTRFS info (device vda6): enabling free space tree Jan 22 00:27:58.020990 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:27:58.586096 ignition[1067]: INFO : Ignition 2.22.0 Jan 22 00:27:58.586096 ignition[1067]: INFO : Stage: files Jan 22 00:27:58.586096 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:27:58.586096 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 00:27:58.694319 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Jan 22 00:27:58.694319 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 22 00:27:58.694319 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 22 00:27:58.694319 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 22 00:27:58.694319 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 22 00:27:58.694319 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 22 00:27:58.668210 unknown[1067]: wrote ssh authorized keys file for user: core Jan 22 00:27:58.855335 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 22 00:27:58.855335 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 
22 00:27:59.223069 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 22 00:27:59.601144 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 22 00:27:59.601144 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 22 00:27:59.666154 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 22 00:27:59.666154 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:27:59.730254 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 22 00:28:00.178913 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 22 00:28:08.554935 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 22 00:28:08.554935 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 22 00:28:08.610902 
ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 22 00:28:08.610902 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 22 00:28:08.759770 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 22 00:28:08.783362 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 22 00:28:08.783362 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 22 00:28:08.783362 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 22 00:28:08.783362 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 22 00:28:08.783362 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:28:09.007336 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:28:09.007336 ignition[1067]: INFO : files: files passed Jan 22 00:28:09.007336 ignition[1067]: INFO : Ignition finished successfully Jan 22 00:28:09.063433 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 22 00:28:09.121341 kernel: audit: type=1130 audit(1769041689.075:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.126314 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 22 00:28:09.141861 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 22 00:28:09.228264 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 22 00:28:09.228437 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 22 00:28:09.339806 kernel: audit: type=1130 audit(1769041689.240:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.339888 kernel: audit: type=1131 audit(1769041689.240:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.405370 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory Jan 22 00:28:09.428913 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:28:09.428913 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:28:09.499370 kernel: audit: type=1130 audit(1769041689.458:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:28:09.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.499802 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:28:09.438785 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:28:09.459451 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 22 00:28:09.536708 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 22 00:28:09.768195 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 22 00:28:09.768813 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 22 00:28:09.867802 kernel: audit: type=1130 audit(1769041689.786:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.867849 kernel: audit: type=1131 audit(1769041689.786:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:09.790222 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 22 00:28:09.838458 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 22 00:28:09.881879 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 22 00:28:09.885203 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 22 00:28:10.012992 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 22 00:28:10.053757 kernel: audit: type=1130 audit(1769041690.021:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.031394 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 22 00:28:10.196392 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 22 00:28:10.196963 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:28:10.215759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:28:10.234028 systemd[1]: Stopped target timers.target - Timer Units. Jan 22 00:28:10.257154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 22 00:28:10.315843 kernel: audit: type=1131 audit(1769041690.281:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:28:10.257350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 22 00:28:10.311359 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 22 00:28:10.325264 systemd[1]: Stopped target basic.target - Basic System. Jan 22 00:28:10.344213 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 22 00:28:10.364941 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:28:10.384937 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 22 00:28:10.410099 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 22 00:28:10.438339 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 22 00:28:10.456163 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 22 00:28:10.477057 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 22 00:28:10.508908 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 22 00:28:10.529842 systemd[1]: Stopped target swap.target - Swaps. Jan 22 00:28:10.605203 kernel: audit: type=1131 audit(1769041690.560:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.545799 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 22 00:28:10.545950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 22 00:28:10.600426 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:28:10.627947 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 22 00:28:10.667459 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 22 00:28:10.697348 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:28:10.720402 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 22 00:28:10.789870 kernel: audit: type=1131 audit(1769041690.738:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.721189 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 22 00:28:10.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.781419 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 22 00:28:10.782833 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 00:28:10.799072 systemd[1]: Stopped target paths.target - Path Units. Jan 22 00:28:10.822299 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 22 00:28:10.833056 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:28:10.863451 systemd[1]: Stopped target slices.target - Slice Units. Jan 22 00:28:10.876989 systemd[1]: Stopped target sockets.target - Socket Units. Jan 22 00:28:10.896271 systemd[1]: iscsid.socket: Deactivated successfully. Jan 22 00:28:10.896421 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 22 00:28:10.918158 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 22 00:28:10.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.920923 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 22 00:28:10.942232 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 22 00:28:10.942357 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:28:11.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.961268 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 22 00:28:11.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:10.961794 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:28:10.983772 systemd[1]: ignition-files.service: Deactivated successfully. Jan 22 00:28:10.983971 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 22 00:28:11.016349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 22 00:28:11.029014 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 22 00:28:11.046992 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 22 00:28:11.047393 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:28:11.066334 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 22 00:28:11.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.066933 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:28:11.106904 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 22 00:28:11.107101 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 22 00:28:11.151775 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 22 00:28:11.193033 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 22 00:28:11.303836 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 22 00:28:11.332415 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 22 00:28:11.336794 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 22 00:28:11.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:28:11.419870 ignition[1124]: INFO : Ignition 2.22.0 Jan 22 00:28:11.419870 ignition[1124]: INFO : Stage: umount Jan 22 00:28:11.435974 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:28:11.435974 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 22 00:28:11.456994 ignition[1124]: INFO : umount: umount passed Jan 22 00:28:11.456994 ignition[1124]: INFO : Ignition finished successfully Jan 22 00:28:11.482124 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 22 00:28:11.482835 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 22 00:28:11.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.502856 systemd[1]: Stopped target network.target - Network. Jan 22 00:28:11.522897 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 22 00:28:11.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.523007 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 22 00:28:11.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.542851 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 22 00:28:11.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.542943 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 22 00:28:11.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.567132 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 22 00:28:11.567253 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 22 00:28:11.576271 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 22 00:28:11.576359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 22 00:28:11.597246 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 22 00:28:11.597346 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 22 00:28:11.615179 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 22 00:28:11.647440 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 22 00:28:11.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.743931 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 22 00:28:11.744765 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 22 00:28:11.819000 audit: BPF prog-id=6 op=UNLOAD Jan 22 00:28:11.881280 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 22 00:28:11.887787 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 22 00:28:11.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:11.969000 audit: BPF prog-id=9 op=UNLOAD Jan 22 00:28:11.969389 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 22 00:28:11.982990 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 22 00:28:12.000268 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:28:12.042737 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 22 00:28:12.054046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 22 00:28:12.054354 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 22 00:28:12.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.098031 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 22 00:28:12.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.098126 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:28:12.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.117202 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 22 00:28:12.117304 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 22 00:28:12.142148 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 22 00:28:12.241315 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 22 00:28:12.253044 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:28:12.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.298854 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 22 00:28:12.299096 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 22 00:28:12.306956 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 22 00:28:12.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.307019 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:28:12.328418 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 22 00:28:12.328792 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 22 00:28:12.369331 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 22 00:28:12.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:12.369455 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 22 00:28:12.390346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 22 00:28:12.390459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 22 00:28:12.490110 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 22 00:28:12.505479 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 22 00:28:12.505879 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 00:28:12.543333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 00:28:12.543716 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 00:28:12.567228 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 22 00:28:12.568138 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 00:28:12.592711 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 00:28:12.592838 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 00:28:12.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.730210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 00:28:12.731231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:28:12.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.773754 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 22 00:28:12.806261 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 22 00:28:12.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.844946 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 00:28:12.845325 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 22 00:28:12.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:12.895147 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 22 00:28:12.908066 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 22 00:28:13.004156 systemd[1]: Switching root.
Jan 22 00:28:13.069258 systemd-journald[321]: Journal stopped
Jan 22 00:28:18.990966 systemd-journald[321]: Received SIGTERM from PID 1 (systemd).
Jan 22 00:28:18.991065 kernel: SELinux: policy capability network_peer_controls=1
Jan 22 00:28:18.991086 kernel: SELinux: policy capability open_perms=1
Jan 22 00:28:18.991109 kernel: SELinux: policy capability extended_socket_class=1
Jan 22 00:28:18.991126 kernel: SELinux: policy capability always_check_network=0
Jan 22 00:28:18.991143 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 22 00:28:18.991165 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 22 00:28:18.991183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 22 00:28:18.991200 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 22 00:28:18.991216 kernel: SELinux: policy capability userspace_initial_context=0
Jan 22 00:28:18.991247 systemd[1]: Successfully loaded SELinux policy in 326.173ms.
Jan 22 00:28:18.991274 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.842ms.
Jan 22 00:28:18.991298 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 22 00:28:18.991316 systemd[1]: Detected virtualization kvm.
Jan 22 00:28:18.991333 systemd[1]: Detected architecture x86-64.
Jan 22 00:28:18.991351 systemd[1]: Detected first boot.
Jan 22 00:28:18.991368 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 22 00:28:18.991389 kernel: kauditd_printk_skb: 36 callbacks suppressed
Jan 22 00:28:18.991851 kernel: audit: type=1334 audit(1769041694.127:84): prog-id=10 op=LOAD
Jan 22 00:28:18.991876 kernel: audit: type=1334 audit(1769041694.127:85): prog-id=10 op=UNLOAD
Jan 22 00:28:18.991896 kernel: audit: type=1334 audit(1769041694.127:86): prog-id=11 op=LOAD
Jan 22 00:28:18.991913 kernel: audit: type=1334 audit(1769041694.127:87): prog-id=11 op=UNLOAD
Jan 22 00:28:18.991930 zram_generator::config[1170]: No configuration found.
Jan 22 00:28:18.992048 kernel: Guest personality initialized and is inactive
Jan 22 00:28:18.992066 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 22 00:28:18.992083 kernel: Initialized host personality
Jan 22 00:28:18.992191 kernel: NET: Registered PF_VSOCK protocol family
Jan 22 00:28:18.992211 systemd[1]: Populated /etc with preset unit settings.
Jan 22 00:28:18.992229 kernel: audit: type=1334 audit(1769041696.370:88): prog-id=12 op=LOAD
Jan 22 00:28:18.992246 kernel: audit: type=1334 audit(1769041696.370:89): prog-id=3 op=UNLOAD
Jan 22 00:28:18.992269 kernel: audit: type=1334 audit(1769041696.370:90): prog-id=13 op=LOAD
Jan 22 00:28:18.992290 kernel: audit: type=1334 audit(1769041696.370:91): prog-id=14 op=LOAD
Jan 22 00:28:18.992306 kernel: audit: type=1334 audit(1769041696.370:92): prog-id=4 op=UNLOAD
Jan 22 00:28:18.992323 kernel: audit: type=1334 audit(1769041696.370:93): prog-id=5 op=UNLOAD
Jan 22 00:28:18.992340 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 22 00:28:18.992358 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 22 00:28:18.992376 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 00:28:18.992401 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 22 00:28:18.992423 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 22 00:28:18.992441 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 22 00:28:18.992459 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 22 00:28:18.992480 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 22 00:28:18.992498 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 22 00:28:18.992850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 22 00:28:18.992872 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 22 00:28:18.992890 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 22 00:28:18.992908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 22 00:28:18.992926 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 22 00:28:18.992943 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 22 00:28:18.992961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 22 00:28:18.992984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 22 00:28:18.993003 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 22 00:28:18.993111 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 22 00:28:18.993132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 22 00:28:18.993150 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 22 00:28:18.993168 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 22 00:28:18.993186 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 22 00:28:18.993208 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 22 00:28:18.993226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 22 00:28:18.993243 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 22 00:28:18.993261 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 22 00:28:18.993279 systemd[1]: Reached target slices.target - Slice Units.
Jan 22 00:28:18.993298 systemd[1]: Reached target swap.target - Swaps.
Jan 22 00:28:18.993316 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 22 00:28:18.993333 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 22 00:28:18.993354 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 22 00:28:18.993372 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 22 00:28:18.993389 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 22 00:28:18.993407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 22 00:28:18.993424 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 22 00:28:18.993442 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 22 00:28:18.993460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 22 00:28:18.993481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 22 00:28:18.993499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 22 00:28:18.993844 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 22 00:28:18.993866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 22 00:28:18.993884 systemd[1]: Mounting media.mount - External Media Directory...
Jan 22 00:28:18.993901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 22 00:28:18.993919 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 22 00:28:18.993942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 22 00:28:18.993960 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 22 00:28:18.993978 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 00:28:18.993996 systemd[1]: Reached target machines.target - Containers.
Jan 22 00:28:18.994015 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 22 00:28:18.994033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 22 00:28:18.994055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 22 00:28:18.994072 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 22 00:28:18.994091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 22 00:28:18.994108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 22 00:28:18.994126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 22 00:28:18.994143 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 22 00:28:18.994161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 22 00:28:18.994183 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 22 00:28:18.994201 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 22 00:28:18.994219 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 22 00:28:18.994236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 22 00:28:18.994254 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 22 00:28:18.994272 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 22 00:28:18.994289 kernel: fuse: init (API version 7.41)
Jan 22 00:28:18.994309 kernel: ACPI: bus type drm_connector registered
Jan 22 00:28:18.994327 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 22 00:28:18.994344 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 22 00:28:18.994362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 22 00:28:18.994382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 22 00:28:18.994436 systemd-journald[1256]: Collecting audit messages is enabled.
Jan 22 00:28:18.994472 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 22 00:28:18.994491 systemd-journald[1256]: Journal started
Jan 22 00:28:18.994863 systemd-journald[1256]: Runtime Journal (/run/log/journal/c9edd8c2858c4760bc873f24db2083c5) is 6M, max 48.2M, 42.2M free.
Jan 22 00:28:17.557000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 22 00:28:18.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:18.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:18.600000 audit: BPF prog-id=14 op=UNLOAD
Jan 22 00:28:18.600000 audit: BPF prog-id=13 op=UNLOAD
Jan 22 00:28:18.671000 audit: BPF prog-id=15 op=LOAD
Jan 22 00:28:18.740000 audit: BPF prog-id=16 op=LOAD
Jan 22 00:28:18.743000 audit: BPF prog-id=17 op=LOAD
Jan 22 00:28:18.983000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 22 00:28:18.983000 audit[1256]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe7e64b140 a2=4000 a3=0 items=0 ppid=1 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:28:18.983000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 22 00:28:16.314052 systemd[1]: Queued start job for default target multi-user.target.
Jan 22 00:28:16.373738 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 22 00:28:16.377334 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 00:28:16.378392 systemd[1]: systemd-journald.service: Consumed 5.283s CPU time.
Jan 22 00:28:19.034850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 22 00:28:19.068184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 22 00:28:19.087993 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 22 00:28:19.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.114456 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 22 00:28:19.129365 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 22 00:28:19.146497 systemd[1]: Mounted media.mount - External Media Directory.
Jan 22 00:28:19.159994 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 22 00:28:19.185155 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 22 00:28:19.209186 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 22 00:28:19.245398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 22 00:28:19.285802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 00:28:19.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.346910 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jan 22 00:28:19.347005 kernel: audit: type=1130 audit(1769041699.270:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.348047 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 00:28:19.349072 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 22 00:28:19.402030 kernel: audit: type=1130 audit(1769041699.346:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.402474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 22 00:28:19.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.403156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 22 00:28:19.472961 kernel: audit: type=1130 audit(1769041699.400:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.473084 kernel: audit: type=1131 audit(1769041699.400:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.489189 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 00:28:19.493508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 22 00:28:19.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.522283 kernel: audit: type=1130 audit(1769041699.486:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.522361 kernel: audit: type=1131 audit(1769041699.487:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.575483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 00:28:19.576170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 22 00:28:19.619923 kernel: audit: type=1130 audit(1769041699.573:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.638039 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 00:28:19.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.639402 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 22 00:28:19.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.717824 kernel: audit: type=1131 audit(1769041699.574:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.717967 kernel: audit: type=1130 audit(1769041699.636:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.718000 kernel: audit: type=1131 audit(1769041699.636:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.773047 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 22 00:28:19.773861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 22 00:28:19.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.801478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 22 00:28:19.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.819426 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 00:28:19.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.839502 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 22 00:28:19.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.857250 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 22 00:28:19.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:19.874996 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 22 00:28:19.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:20.204249 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 22 00:28:20.234123 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 22 00:28:20.272143 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 22 00:28:20.320225 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 22 00:28:20.342363 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 22 00:28:20.343074 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 22 00:28:20.363013 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 22 00:28:20.379332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 00:28:20.379975 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 22 00:28:20.408086 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 22 00:28:20.433357 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 22 00:28:20.450918 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 00:28:20.455965 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 22 00:28:20.471056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 00:28:20.475199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 22 00:28:20.503186 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 22 00:28:20.524149 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 22 00:28:20.545474 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 22 00:28:20.561105 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 22 00:28:20.611110 systemd-journald[1256]: Time spent on flushing to /var/log/journal/c9edd8c2858c4760bc873f24db2083c5 is 86.126ms for 1144 entries.
Jan 22 00:28:20.611110 systemd-journald[1256]: System Journal (/var/log/journal/c9edd8c2858c4760bc873f24db2083c5) is 8M, max 163.5M, 155.5M free.
Jan 22 00:28:20.769395 systemd-journald[1256]: Received client request to flush runtime journal.
Jan 22 00:28:20.769469 kernel: loop1: detected capacity change from 0 to 219144
Jan 22 00:28:20.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:20.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:20.615411 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 22 00:28:20.649404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 22 00:28:20.671127 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 22 00:28:20.737407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 22 00:28:20.759502 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 22 00:28:20.759897 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 22 00:28:20.774503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 00:28:20.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:20.810383 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 22 00:28:20.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:20.844965 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 22 00:28:20.877342 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 22 00:28:20.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:28:20.911983 kernel: loop2: detected capacity change from 0 to 111544
Jan 22 00:28:21.051234 kernel: loop3: detected capacity change from 0 to 119256
Jan 22 00:28:21.180945 kernel: loop4: detected capacity change from 0 to 219144
Jan 22 00:28:21.220115 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 22 00:28:21.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:21.247000 audit: BPF prog-id=18 op=LOAD Jan 22 00:28:21.247000 audit: BPF prog-id=19 op=LOAD Jan 22 00:28:21.247000 audit: BPF prog-id=20 op=LOAD Jan 22 00:28:21.250379 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 22 00:28:21.266000 audit: BPF prog-id=21 op=LOAD Jan 22 00:28:21.270384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 22 00:28:21.302338 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 22 00:28:21.328087 kernel: loop5: detected capacity change from 0 to 111544 Jan 22 00:28:21.353000 audit: BPF prog-id=22 op=LOAD Jan 22 00:28:21.354000 audit: BPF prog-id=23 op=LOAD Jan 22 00:28:21.354000 audit: BPF prog-id=24 op=LOAD Jan 22 00:28:21.357961 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 22 00:28:21.392951 kernel: loop6: detected capacity change from 0 to 119256 Jan 22 00:28:21.395000 audit: BPF prog-id=25 op=LOAD Jan 22 00:28:21.396000 audit: BPF prog-id=26 op=LOAD Jan 22 00:28:21.396000 audit: BPF prog-id=27 op=LOAD Jan 22 00:28:21.438047 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 22 00:28:21.502238 (sd-merge)[1312]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 22 00:28:21.543882 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 22 00:28:21.568125 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 22 00:28:21.569039 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 22 00:28:21.574227 (sd-merge)[1312]: Merged extensions into '/usr'. 
Jan 22 00:28:21.587292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:28:21.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:21.660005 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... Jan 22 00:28:21.660039 systemd[1]: Reloading... Jan 22 00:28:21.778905 systemd-nsresourced[1317]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 22 00:28:21.867829 zram_generator::config[1356]: No configuration found. Jan 22 00:28:22.168488 systemd-oomd[1314]: No swap; memory pressure usage will be degraded Jan 22 00:28:22.266098 systemd-resolved[1315]: Positive Trust Anchors: Jan 22 00:28:22.266218 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 22 00:28:22.266227 systemd-resolved[1315]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 22 00:28:22.266271 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 22 00:28:22.281321 systemd-resolved[1315]: Defaulting to hostname 'linux'. Jan 22 00:28:22.755288 systemd[1]: Reloading finished in 1094 ms. Jan 22 00:28:24.178318 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 22 00:28:24.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.250392 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 22 00:28:24.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.284106 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 22 00:28:24.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.328508 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 22 00:28:24.329175 kernel: audit: type=1130 audit(1769041704.318:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.328007 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 22 00:28:24.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.429353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 22 00:28:24.437932 kernel: audit: type=1130 audit(1769041704.381:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:28:24.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.510005 kernel: audit: type=1130 audit(1769041704.460:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:24.540056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:28:24.588214 systemd[1]: Starting ensure-sysext.service... Jan 22 00:28:24.614410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 22 00:28:24.631000 audit: BPF prog-id=28 op=LOAD Jan 22 00:28:24.644405 kernel: audit: type=1334 audit(1769041704.631:149): prog-id=28 op=LOAD Jan 22 00:28:24.649935 kernel: audit: type=1334 audit(1769041704.631:150): prog-id=25 op=UNLOAD Jan 22 00:28:24.631000 audit: BPF prog-id=25 op=UNLOAD Jan 22 00:28:24.631000 audit: BPF prog-id=29 op=LOAD Jan 22 00:28:24.631000 audit: BPF prog-id=30 op=LOAD Jan 22 00:28:24.675029 kernel: audit: type=1334 audit(1769041704.631:151): prog-id=29 op=LOAD Jan 22 00:28:24.710992 kernel: audit: type=1334 audit(1769041704.631:152): prog-id=30 op=LOAD Jan 22 00:28:24.711024 kernel: audit: type=1334 audit(1769041704.631:153): prog-id=26 op=UNLOAD Jan 22 00:28:24.711053 kernel: audit: type=1334 audit(1769041704.631:154): prog-id=27 op=UNLOAD Jan 22 00:28:24.711195 kernel: audit: type=1334 audit(1769041704.631:155): prog-id=31 op=LOAD Jan 22 00:28:24.631000 audit: BPF prog-id=26 op=UNLOAD Jan 22 00:28:24.631000 audit: BPF prog-id=27 op=UNLOAD Jan 22 00:28:24.631000 audit: BPF prog-id=31 op=LOAD Jan 22 00:28:24.672000 audit: BPF prog-id=15 op=UNLOAD Jan 22 00:28:24.672000 audit: BPF prog-id=32 op=LOAD Jan 22 00:28:24.672000 audit: BPF prog-id=33 
op=LOAD Jan 22 00:28:24.672000 audit: BPF prog-id=16 op=UNLOAD Jan 22 00:28:24.672000 audit: BPF prog-id=17 op=UNLOAD Jan 22 00:28:24.702000 audit: BPF prog-id=34 op=LOAD Jan 22 00:28:24.702000 audit: BPF prog-id=18 op=UNLOAD Jan 22 00:28:24.707000 audit: BPF prog-id=35 op=LOAD Jan 22 00:28:24.707000 audit: BPF prog-id=36 op=LOAD Jan 22 00:28:24.707000 audit: BPF prog-id=19 op=UNLOAD Jan 22 00:28:24.707000 audit: BPF prog-id=20 op=UNLOAD Jan 22 00:28:24.734000 audit: BPF prog-id=37 op=LOAD Jan 22 00:28:24.734000 audit: BPF prog-id=22 op=UNLOAD Jan 22 00:28:24.735000 audit: BPF prog-id=38 op=LOAD Jan 22 00:28:24.735000 audit: BPF prog-id=39 op=LOAD Jan 22 00:28:24.735000 audit: BPF prog-id=23 op=UNLOAD Jan 22 00:28:24.735000 audit: BPF prog-id=24 op=UNLOAD Jan 22 00:28:24.747000 audit: BPF prog-id=40 op=LOAD Jan 22 00:28:24.747000 audit: BPF prog-id=21 op=UNLOAD Jan 22 00:28:24.804990 systemd[1]: Reload requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)... Jan 22 00:28:24.805439 systemd[1]: Reloading... Jan 22 00:28:24.914939 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 22 00:28:24.915922 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 22 00:28:24.917228 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 22 00:28:24.924469 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Jan 22 00:28:24.924952 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Jan 22 00:28:24.981254 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 00:28:24.982969 systemd-tmpfiles[1399]: Skipping /boot Jan 22 00:28:25.056931 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 22 00:28:25.056956 systemd-tmpfiles[1399]: Skipping /boot Jan 22 00:28:25.808166 zram_generator::config[1428]: No configuration found. Jan 22 00:28:27.113142 systemd[1]: Reloading finished in 2306 ms. Jan 22 00:28:27.146182 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 22 00:28:27.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.215310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:28:27.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.261000 audit: BPF prog-id=41 op=LOAD Jan 22 00:28:27.261000 audit: BPF prog-id=37 op=UNLOAD Jan 22 00:28:27.261000 audit: BPF prog-id=42 op=LOAD Jan 22 00:28:27.261000 audit: BPF prog-id=43 op=LOAD Jan 22 00:28:27.261000 audit: BPF prog-id=38 op=UNLOAD Jan 22 00:28:27.261000 audit: BPF prog-id=39 op=UNLOAD Jan 22 00:28:27.272000 audit: BPF prog-id=44 op=LOAD Jan 22 00:28:27.272000 audit: BPF prog-id=40 op=UNLOAD Jan 22 00:28:27.275000 audit: BPF prog-id=45 op=LOAD Jan 22 00:28:27.275000 audit: BPF prog-id=28 op=UNLOAD Jan 22 00:28:27.275000 audit: BPF prog-id=46 op=LOAD Jan 22 00:28:27.276000 audit: BPF prog-id=47 op=LOAD Jan 22 00:28:27.276000 audit: BPF prog-id=29 op=UNLOAD Jan 22 00:28:27.276000 audit: BPF prog-id=30 op=UNLOAD Jan 22 00:28:27.278000 audit: BPF prog-id=48 op=LOAD Jan 22 00:28:27.278000 audit: BPF prog-id=34 op=UNLOAD Jan 22 00:28:27.278000 audit: BPF prog-id=49 op=LOAD Jan 22 00:28:27.279000 audit: BPF prog-id=50 op=LOAD Jan 22 00:28:27.279000 audit: BPF prog-id=35 op=UNLOAD Jan 22 00:28:27.279000 audit: BPF prog-id=36 op=UNLOAD Jan 22 
00:28:27.281000 audit: BPF prog-id=51 op=LOAD Jan 22 00:28:27.281000 audit: BPF prog-id=31 op=UNLOAD Jan 22 00:28:27.281000 audit: BPF prog-id=52 op=LOAD Jan 22 00:28:27.282000 audit: BPF prog-id=53 op=LOAD Jan 22 00:28:27.282000 audit: BPF prog-id=32 op=UNLOAD Jan 22 00:28:27.282000 audit: BPF prog-id=33 op=UNLOAD Jan 22 00:28:27.337957 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:28:27.377300 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 22 00:28:27.423940 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 22 00:28:27.461312 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 22 00:28:27.497000 audit: BPF prog-id=8 op=UNLOAD Jan 22 00:28:27.497000 audit: BPF prog-id=7 op=UNLOAD Jan 22 00:28:27.502000 audit: BPF prog-id=54 op=LOAD Jan 22 00:28:27.502000 audit: BPF prog-id=55 op=LOAD Jan 22 00:28:27.513882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:28:27.562900 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 22 00:28:27.609761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:28:27.610186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:28:27.629359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:28:27.670197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:28:27.708949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:28:27.721484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 22 00:28:27.722245 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:28:27.722379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:28:27.722925 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:28:27.735029 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:28:27.735202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:28:27.735402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:28:27.736805 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:28:27.736949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:28:27.737028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:28:27.739929 systemd-udevd[1480]: Using default interface naming scheme 'v257'. Jan 22 00:28:27.747288 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 22 00:28:27.747892 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:28:27.752000 audit[1482]: SYSTEM_BOOT pid=1482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.753094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 22 00:28:27.803488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:28:27.804958 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:28:27.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.805120 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:28:27.805302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:28:27.827124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:28:27.828023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 22 00:28:27.834237 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:28:27.834903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:28:27.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.866070 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 22 00:28:27.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.896319 systemd[1]: Finished ensure-sysext.service. Jan 22 00:28:27.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:27.912948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:28:27.914067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 22 00:28:27.923000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 22 00:28:27.923000 audit[1501]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd07a04cc0 a2=420 a3=0 items=0 ppid=1470 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:28:27.923000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:28:27.925496 augenrules[1501]: No rules Jan 22 00:28:27.931013 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:28:27.931375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 00:28:27.950319 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:28:27.951117 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:28:27.975144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:28:28.043275 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 22 00:28:28.071485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 00:28:28.085020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:28:28.085122 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 00:28:28.091424 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 22 00:28:28.271278 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 22 00:28:28.334494 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 22 00:28:28.384784 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 22 00:28:28.457446 kernel: mousedev: PS/2 mouse device common for all mice Jan 22 00:28:28.603836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 22 00:28:28.753120 kernel: ACPI: button: Power Button [PWRF] Jan 22 00:28:28.780754 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 22 00:28:28.817185 systemd[1]: Reached target time-set.target - System Time Set. Jan 22 00:28:28.863459 systemd-networkd[1529]: lo: Link UP Jan 22 00:28:28.865502 systemd-networkd[1529]: lo: Gained carrier Jan 22 00:28:28.898159 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 22 00:28:28.898973 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 22 00:28:28.880346 systemd-networkd[1529]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:28:28.880354 systemd-networkd[1529]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 22 00:28:28.885724 systemd-networkd[1529]: eth0: Link UP Jan 22 00:28:28.891340 systemd-networkd[1529]: eth0: Gained carrier Jan 22 00:28:28.891363 systemd-networkd[1529]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:28:28.934184 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 00:28:28.966828 systemd-networkd[1529]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 22 00:28:28.973015 systemd-timesyncd[1530]: Network configuration changed, trying to establish connection. Jan 22 00:28:29.932759 systemd-timesyncd[1530]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jan 22 00:28:29.933126 systemd-timesyncd[1530]: Initial clock synchronization to Thu 2026-01-22 00:28:29.932556 UTC. Jan 22 00:28:29.933545 systemd-resolved[1315]: Clock change detected. Flushing caches. Jan 22 00:28:29.978407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 22 00:28:29.996226 systemd[1]: Reached target network.target - Network. Jan 22 00:28:30.103675 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 22 00:28:30.178785 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 22 00:28:30.205620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 22 00:28:30.363646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:28:30.387616 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 22 00:28:30.420581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 22 00:28:31.171720 systemd-networkd[1529]: eth0: Gained IPv6LL Jan 22 00:28:31.212308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 22 00:28:31.232399 systemd[1]: Reached target network-online.target - Network is Online. Jan 22 00:28:32.253350 kernel: kvm_amd: TSC scaling supported Jan 22 00:28:32.255175 kernel: kvm_amd: Nested Virtualization enabled Jan 22 00:28:32.255226 kernel: kvm_amd: Nested Paging enabled Jan 22 00:28:32.257495 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 22 00:28:32.258535 kernel: kvm_amd: PMU virtualization is disabled Jan 22 00:28:33.074319 kernel: EDAC MC: Ver: 3.0.0 Jan 22 00:28:33.161238 ldconfig[1473]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 22 00:28:33.192442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 22 00:28:33.364357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:28:33.426135 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 22 00:28:33.532594 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 22 00:28:33.560454 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:28:33.594254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 22 00:28:33.668078 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 22 00:28:33.686090 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 22 00:28:33.704503 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 22 00:28:33.720619 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 22 00:28:33.747333 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 22 00:28:33.764790 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 22 00:28:33.780415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 22 00:28:33.801552 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 22 00:28:33.802731 systemd[1]: Reached target paths.target - Path Units. Jan 22 00:28:33.822431 systemd[1]: Reached target timers.target - Timer Units. Jan 22 00:28:33.852265 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 22 00:28:33.884620 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 22 00:28:33.915385 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 22 00:28:33.935604 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 22 00:28:33.961693 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 22 00:28:34.009229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 22 00:28:34.027314 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 22 00:28:34.049673 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 22 00:28:34.073254 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 00:28:34.100369 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:28:34.117700 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:28:34.120306 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:28:34.128217 systemd[1]: Starting containerd.service - containerd container runtime... Jan 22 00:28:34.158121 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 22 00:28:34.180608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 22 00:28:34.196238 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 22 00:28:34.219244 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 22 00:28:34.254490 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 22 00:28:34.269597 jq[1589]: false Jan 22 00:28:34.269362 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 22 00:28:34.273146 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jan 22 00:28:34.360460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:28:34.390170 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 22 00:28:34.414529 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 22 00:28:34.441703 oslogin_cache_refresh[1591]: Refreshing passwd entry cache Jan 22 00:28:34.444739 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing passwd entry cache Jan 22 00:28:34.445309 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 22 00:28:34.469204 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 22 00:28:34.492329 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting users, quitting Jan 22 00:28:34.492423 oslogin_cache_refresh[1591]: Failure getting users, quitting Jan 22 00:28:34.493438 oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:28:34.496611 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:28:34.496611 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing group entry cache Jan 22 00:28:34.493523 oslogin_cache_refresh[1591]: Refreshing group entry cache Jan 22 00:28:34.499177 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 22 00:28:34.513492 extend-filesystems[1590]: Found /dev/vda6 Jan 22 00:28:34.549456 extend-filesystems[1590]: Found /dev/vda9 Jan 22 00:28:34.562678 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting groups, quitting Jan 22 00:28:34.562678 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 22 00:28:34.524133 oslogin_cache_refresh[1591]: Failure getting groups, quitting Jan 22 00:28:34.561474 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 22 00:28:34.563341 extend-filesystems[1590]: Checking size of /dev/vda9 Jan 22 00:28:34.524156 oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:28:34.568471 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 22 00:28:34.571196 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 22 00:28:34.575340 systemd[1]: Starting update-engine.service - Update Engine... Jan 22 00:28:34.613239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 22 00:28:34.653476 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 22 00:28:34.671649 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 22 00:28:34.672425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 22 00:28:34.673417 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 22 00:28:34.675263 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 22 00:28:34.707295 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 22 00:28:34.709208 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 22 00:28:34.748454 jq[1614]: true Jan 22 00:28:34.777422 systemd[1]: motdgen.service: Deactivated successfully. Jan 22 00:28:34.778300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 22 00:28:34.819574 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 22 00:28:34.882649 extend-filesystems[1590]: Resized partition /dev/vda9
Jan 22 00:28:34.925677 tar[1622]: linux-amd64/LICENSE
Jan 22 00:28:34.927621 tar[1622]: linux-amd64/helm
Jan 22 00:28:34.942676 update_engine[1610]: I20260122 00:28:34.941768 1610 main.cc:92] Flatcar Update Engine starting
Jan 22 00:28:34.945725 jq[1634]: true
Jan 22 00:28:34.968723 extend-filesystems[1651]: resize2fs 1.47.3 (8-Jul-2025)
Jan 22 00:28:35.127582 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 22 00:28:35.128541 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 22 00:28:35.159591 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 22 00:28:35.165752 dbus-daemon[1587]: [system] SELinux support is enabled
Jan 22 00:28:35.167545 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 22 00:28:35.185207 update_engine[1610]: I20260122 00:28:35.184572 1610 update_check_scheduler.cc:74] Next update check in 7m17s
Jan 22 00:28:35.198168 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 22 00:28:35.198324 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 22 00:28:35.238589 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 22 00:28:35.266246 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 22 00:28:35.266386 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 22 00:28:35.379423 systemd[1]: Started update-engine.service - Update Engine.
Jan 22 00:28:35.440463 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 22 00:28:36.155542 sshd_keygen[1638]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 22 00:28:36.212687 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 22 00:28:36.216584 extend-filesystems[1651]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 22 00:28:36.216584 extend-filesystems[1651]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 22 00:28:36.216584 extend-filesystems[1651]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 22 00:28:36.317328 extend-filesystems[1590]: Resized filesystem in /dev/vda9
Jan 22 00:28:36.359631 bash[1672]: Updated "/home/core/.ssh/authorized_keys"
Jan 22 00:28:36.228410 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 22 00:28:36.279675 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 22 00:28:36.327147 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 22 00:28:36.362558 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 22 00:28:36.366252 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 22 00:28:36.383577 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 22 00:28:36.383619 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 22 00:28:36.392243 systemd-logind[1609]: New seat seat0.
Jan 22 00:28:36.392262 locksmithd[1661]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 22 00:28:36.397112 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 22 00:28:36.419537 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 22 00:28:36.482391 systemd[1]: issuegen.service: Deactivated successfully.
Jan 22 00:28:36.483184 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 22 00:28:36.505646 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 22 00:28:37.010261 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 22 00:28:37.042394 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 22 00:28:37.096755 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 22 00:28:37.183527 systemd[1]: Reached target getty.target - Login Prompts.
Jan 22 00:28:39.169759 containerd[1635]: time="2026-01-22T00:28:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 22 00:28:39.177363 containerd[1635]: time="2026-01-22T00:28:39.177324227Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 22 00:28:39.329539 containerd[1635]: time="2026-01-22T00:28:39.329469370Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.936µs"
Jan 22 00:28:39.330642 containerd[1635]: time="2026-01-22T00:28:39.329704189Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 22 00:28:39.330642 containerd[1635]: time="2026-01-22T00:28:39.329772717Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 22 00:28:39.330642 containerd[1635]: time="2026-01-22T00:28:39.329791261Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 22 00:28:39.331723 containerd[1635]: time="2026-01-22T00:28:39.331500855Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 22 00:28:39.331723 containerd[1635]: time="2026-01-22T00:28:39.331615530Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 22 00:28:39.332292 containerd[1635]: time="2026-01-22T00:28:39.332006028Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 22 00:28:39.332292 containerd[1635]: time="2026-01-22T00:28:39.332131302Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.333295 containerd[1635]: time="2026-01-22T00:28:39.332789671Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.333295 containerd[1635]: time="2026-01-22T00:28:39.333169310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 22 00:28:39.333295 containerd[1635]: time="2026-01-22T00:28:39.333186812Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 22 00:28:39.333295 containerd[1635]: time="2026-01-22T00:28:39.333199727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.433377 containerd[1635]: time="2026-01-22T00:28:39.430445365Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.433377 containerd[1635]: time="2026-01-22T00:28:39.430690703Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 22 00:28:39.436356 containerd[1635]: time="2026-01-22T00:28:39.434183804Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.436356 containerd[1635]: time="2026-01-22T00:28:39.436182825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.436356 containerd[1635]: time="2026-01-22T00:28:39.436239871Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 22 00:28:39.436356 containerd[1635]: time="2026-01-22T00:28:39.436257064Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 22 00:28:39.437329 containerd[1635]: time="2026-01-22T00:28:39.436482494Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 22 00:28:39.439662 containerd[1635]: time="2026-01-22T00:28:39.439458028Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 22 00:28:39.442450 containerd[1635]: time="2026-01-22T00:28:39.442007557Z" level=info msg="metadata content store policy set" policy=shared
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.479773272Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.480193987Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.480460185Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.480477797Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.480618410Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.480743613Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481118744Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481135195Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481148609Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481162055Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481173366Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481183886Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481192632Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 22 00:28:39.485151 containerd[1635]: time="2026-01-22T00:28:39.481494375Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.482345814Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.482734500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.482766058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.482784854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483217091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483240765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483261593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483281581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483299183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483319652Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483554660Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483594996Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483697407Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.483744926Z" level=info msg="Start snapshots syncer"
Jan 22 00:28:39.493435 containerd[1635]: time="2026-01-22T00:28:39.484262161Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 22 00:28:39.494158 containerd[1635]: time="2026-01-22T00:28:39.486767798Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 22 00:28:39.494158 containerd[1635]: time="2026-01-22T00:28:39.487367728Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.487620760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488310047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488454577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488474544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488491295Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488509659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488526591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488540457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488557378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488575122Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488669638Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488684846Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 22 00:28:39.498517 containerd[1635]: time="2026-01-22T00:28:39.488696297Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488708240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488718769Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488730111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488744217Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488762060Z" level=info msg="runtime interface created"
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488771939Z" level=info msg="created NRI interface"
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.488783310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.489189629Z" level=info msg="Connect containerd service"
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.489222891Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 22 00:28:39.499200 containerd[1635]: time="2026-01-22T00:28:39.493129833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 22 00:28:40.626009 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 22 00:28:40.664380 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:60240.service - OpenSSH per-connection server daemon (10.0.0.1:60240).
Jan 22 00:28:41.009491 tar[1622]: linux-amd64/README.md
Jan 22 00:28:41.114573 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 22 00:28:41.674518 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 60240 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:41.685413 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:41.725635 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 22 00:28:41.745597 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 22 00:28:41.793462 systemd-logind[1609]: New session 1 of user core.
Jan 22 00:28:41.892280 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 22 00:28:41.922458 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 22 00:28:42.092603 containerd[1635]: time="2026-01-22T00:28:42.092175378Z" level=info msg="Start subscribing containerd event"
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.099188508Z" level=info msg="Start recovering state"
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100471875Z" level=info msg="Start event monitor"
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100501259Z" level=info msg="Start cni network conf syncer for default"
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100621924Z" level=info msg="Start streaming server"
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100674733Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100685924Z" level=info msg="runtime interface starting up..."
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100694300Z" level=info msg="starting plugins..."
Jan 22 00:28:42.102595 containerd[1635]: time="2026-01-22T00:28:42.100716070Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 22 00:28:42.094323 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 22 00:28:42.107425 systemd-logind[1609]: New session c1 of user core.
Jan 22 00:28:42.117234 containerd[1635]: time="2026-01-22T00:28:42.116384516Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 22 00:28:42.117234 containerd[1635]: time="2026-01-22T00:28:42.116704923Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 22 00:28:42.118331 containerd[1635]: time="2026-01-22T00:28:42.117543539Z" level=info msg="containerd successfully booted in 2.953074s"
Jan 22 00:28:42.118559 systemd[1]: Started containerd.service - containerd container runtime.
Jan 22 00:28:42.927741 systemd[1727]: Queued start job for default target default.target.
Jan 22 00:28:42.984631 systemd[1727]: Created slice app.slice - User Application Slice.
Jan 22 00:28:42.984688 systemd[1727]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 22 00:28:42.984712 systemd[1727]: Reached target paths.target - Paths.
Jan 22 00:28:43.030526 systemd[1727]: Reached target timers.target - Timers.
Jan 22 00:28:43.096149 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 22 00:28:43.101411 systemd[1727]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 22 00:28:43.167637 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 22 00:28:43.169432 systemd[1727]: Reached target sockets.target - Sockets.
Jan 22 00:28:43.254156 systemd[1727]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 22 00:28:43.254450 systemd[1727]: Reached target basic.target - Basic System.
Jan 22 00:28:43.254553 systemd[1727]: Reached target default.target - Main User Target.
Jan 22 00:28:43.254615 systemd[1727]: Startup finished in 1.069s.
Jan 22 00:28:43.256299 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 22 00:28:43.279763 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 22 00:28:43.376718 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:60252.service - OpenSSH per-connection server daemon (10.0.0.1:60252).
Jan 22 00:28:43.744350 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 60252 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:43.748497 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:43.770261 systemd-logind[1609]: New session 2 of user core.
Jan 22 00:28:43.793364 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 22 00:28:43.866234 sshd[1747]: Connection closed by 10.0.0.1 port 60252
Jan 22 00:28:43.870561 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Jan 22 00:28:43.907779 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:60252.service: Deactivated successfully.
Jan 22 00:28:43.917507 systemd[1]: session-2.scope: Deactivated successfully.
Jan 22 00:28:43.926017 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit.
Jan 22 00:28:43.933947 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:60262.service - OpenSSH per-connection server daemon (10.0.0.1:60262).
Jan 22 00:28:43.945478 systemd-logind[1609]: Removed session 2.
Jan 22 00:28:44.429145 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 60262 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:44.440474 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:44.506459 systemd-logind[1609]: New session 3 of user core.
Jan 22 00:28:44.532685 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 22 00:28:44.807960 sshd[1756]: Connection closed by 10.0.0.1 port 60262
Jan 22 00:28:44.811449 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Jan 22 00:28:44.836595 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:60262.service: Deactivated successfully.
Jan 22 00:28:44.847414 systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 00:28:44.853454 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit.
Jan 22 00:28:44.859430 systemd-logind[1609]: Removed session 3.
Jan 22 00:28:47.712218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:28:47.716059 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 22 00:28:47.717355 systemd[1]: Startup finished in 16.769s (kernel) + 40.117s (initrd) + 33.513s (userspace) = 1min 30.400s.
Jan 22 00:28:47.760674 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 00:28:51.664650 kubelet[1767]: E0122 00:28:51.662671 1767 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 00:28:51.671626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 00:28:51.672570 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 00:28:51.676366 systemd[1]: kubelet.service: Consumed 8.933s CPU time, 259.5M memory peak.
Jan 22 00:28:55.094089 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:53414.service - OpenSSH per-connection server daemon (10.0.0.1:53414).
Jan 22 00:28:55.610529 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 53414 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:55.615687 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:55.640057 systemd-logind[1609]: New session 4 of user core.
Jan 22 00:28:55.669593 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 22 00:28:55.711506 sshd[1779]: Connection closed by 10.0.0.1 port 53414
Jan 22 00:28:55.711435 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Jan 22 00:28:55.741465 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:53414.service: Deactivated successfully.
Jan 22 00:28:55.746749 systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 00:28:55.756472 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit.
Jan 22 00:28:55.764589 systemd-logind[1609]: Removed session 4.
Jan 22 00:28:55.768793 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:53430.service - OpenSSH per-connection server daemon (10.0.0.1:53430).
Jan 22 00:28:55.877483 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 53430 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:55.882425 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:55.900497 systemd-logind[1609]: New session 5 of user core.
Jan 22 00:28:55.912433 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 22 00:28:56.039617 sshd[1788]: Connection closed by 10.0.0.1 port 53430
Jan 22 00:28:56.041647 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Jan 22 00:28:56.063128 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:53430.service: Deactivated successfully.
Jan 22 00:28:56.067068 systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 00:28:56.069274 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit.
Jan 22 00:28:56.077768 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:53432.service - OpenSSH per-connection server daemon (10.0.0.1:53432).
Jan 22 00:28:56.082501 systemd-logind[1609]: Removed session 5.
Jan 22 00:28:56.214133 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 53432 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:56.217450 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:56.234467 systemd-logind[1609]: New session 6 of user core.
Jan 22 00:28:56.246669 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 22 00:28:56.288162 sshd[1797]: Connection closed by 10.0.0.1 port 53432
Jan 22 00:28:56.288160 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Jan 22 00:28:56.323538 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:53432.service: Deactivated successfully.
Jan 22 00:28:56.327328 systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 00:28:56.542320 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit.
Jan 22 00:28:56.614309 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:53446.service - OpenSSH per-connection server daemon (10.0.0.1:53446).
Jan 22 00:28:56.623269 systemd-logind[1609]: Removed session 6.
Jan 22 00:28:56.886058 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 53446 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:28:56.888526 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:28:56.901747 systemd-logind[1609]: New session 7 of user core.
Jan 22 00:28:56.912351 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 22 00:28:56.974156 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 22 00:28:56.974744 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 22 00:28:57.015725 sudo[1807]: pam_unix(sudo:session): session closed for user root
Jan 22 00:28:57.020457 sshd[1806]: Connection closed by 10.0.0.1 port 53446
Jan 22 00:28:57.021163 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Jan 22 00:28:57.040031 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:53446.service: Deactivated successfully.
Jan 22 00:28:57.043701 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 00:28:57.047670 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit.
Jan 22 00:28:57.062268 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:53462.service - OpenSSH per-connection server daemon (10.0.0.1:53462).
Jan 22 00:28:57.063601 systemd-logind[1609]: Removed session 7.
Jan 22 00:28:57.167702 sshd[1813]: Accepted publickey for core from 10.0.0.1 port 53462 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:28:57.170425 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:28:57.181526 systemd-logind[1609]: New session 8 of user core. Jan 22 00:28:57.191285 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 22 00:28:57.219302 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 22 00:28:57.219711 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:28:57.242487 sudo[1818]: pam_unix(sudo:session): session closed for user root Jan 22 00:28:57.257603 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 22 00:28:57.258385 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:28:57.330790 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:28:57.456000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:28:57.457663 augenrules[1840]: No rules Jan 22 00:28:57.460514 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:28:57.461307 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 22 00:28:57.462608 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 22 00:28:57.462693 kernel: audit: type=1305 audit(1769041737.456:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:28:57.463970 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 22 00:28:57.468457 sshd[1816]: Connection closed by 10.0.0.1 port 53462 Jan 22 00:28:57.470287 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Jan 22 00:28:57.456000 audit[1840]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb7e31b30 a2=420 a3=0 items=0 ppid=1821 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:28:57.499633 kernel: audit: type=1300 audit(1769041737.456:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb7e31b30 a2=420 a3=0 items=0 ppid=1821 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:28:57.499774 kernel: audit: type=1327 audit(1769041737.456:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:28:57.456000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:28:57.509118 kernel: audit: type=1130 audit(1769041737.460:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:28:57.524491 kernel: audit: type=1131 audit(1769041737.460:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.463000 audit[1817]: USER_END pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.558751 kernel: audit: type=1106 audit(1769041737.463:218): pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.559153 kernel: audit: type=1104 audit(1769041737.463:219): pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.463000 audit[1817]: CRED_DISP pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:28:57.577284 kernel: audit: type=1106 audit(1769041737.472:220): pid=1813 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.472000 audit[1813]: USER_END pid=1813 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.472000 audit[1813]: CRED_DISP pid=1813 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.620553 kernel: audit: type=1104 audit(1769041737.472:221): pid=1813 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.629145 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:53462.service: Deactivated successfully. Jan 22 00:28:57.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.25:22-10.0.0.1:53462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.632992 systemd[1]: session-8.scope: Deactivated successfully. Jan 22 00:28:57.637038 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Jan 22 00:28:57.639173 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:53474.service - OpenSSH per-connection server daemon (10.0.0.1:53474). 
Jan 22 00:28:57.641606 systemd-logind[1609]: Removed session 8. Jan 22 00:28:57.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.25:22-10.0.0.1:53474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.657034 kernel: audit: type=1131 audit(1769041737.628:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.25:22-10.0.0.1:53462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.802000 audit[1849]: USER_ACCT pid=1849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.804420 sshd[1849]: Accepted publickey for core from 10.0.0.1 port 53474 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:28:57.830000 audit[1849]: CRED_ACQ pid=1849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.846000 audit[1849]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeecd30c50 a2=3 a3=0 items=0 ppid=1 pid=1849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:28:57.846000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:28:57.848720 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:28:57.888351 systemd-logind[1609]: New session 9 of user core. 
Jan 22 00:28:57.924616 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 22 00:28:57.939000 audit[1849]: USER_START pid=1849 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.946000 audit[1852]: CRED_ACQ pid=1852 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:28:57.976000 audit[1853]: USER_ACCT pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.977000 audit[1853]: CRED_REFR pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:28:57.978076 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 22 00:28:57.979111 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:28:57.989000 audit[1853]: USER_START pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:29:02.590660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 22 00:29:02.601696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:29:04.822571 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 22 00:29:04.887370 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 22 00:29:05.426201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:29:05.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:05.437091 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 22 00:29:05.437217 kernel: audit: type=1130 audit(1769041745.428:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:05.473288 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:29:06.196129 kubelet[1883]: E0122 00:29:06.195152 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:29:06.205197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:29:06.205655 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:29:06.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:06.207519 systemd[1]: kubelet.service: Consumed 2.319s CPU time, 110.4M memory peak. 
Jan 22 00:29:06.230126 kernel: audit: type=1131 audit(1769041746.206:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:07.619640 dockerd[1877]: time="2026-01-22T00:29:07.618729816Z" level=info msg="Starting up" Jan 22 00:29:07.634144 dockerd[1877]: time="2026-01-22T00:29:07.631096272Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 22 00:29:07.707097 dockerd[1877]: time="2026-01-22T00:29:07.706701629Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 22 00:29:07.827450 systemd[1]: var-lib-docker-metacopy\x2dcheck2242263953-merged.mount: Deactivated successfully. Jan 22 00:29:07.957228 dockerd[1877]: time="2026-01-22T00:29:07.950219990Z" level=info msg="Loading containers: start." Jan 22 00:29:08.007226 kernel: Initializing XFRM netlink socket Jan 22 00:29:09.708000 audit[1945]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.708000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4d6951c0 a2=0 a3=0 items=0 ppid=1877 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.733440 kernel: audit: type=1325 audit(1769041749.708:234): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:29:09.801441 kernel: audit: type=1300 audit(1769041749.708:234): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4d6951c0 a2=0 a3=0 items=0 
ppid=1877 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.804355 kernel: audit: type=1327 audit(1769041749.708:234): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:29:09.804471 kernel: audit: type=1325 audit(1769041749.784:235): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.784000 audit[1947]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.784000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd359e7ed0 a2=0 a3=0 items=0 ppid=1877 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.882730 kernel: audit: type=1300 audit(1769041749.784:235): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd359e7ed0 a2=0 a3=0 items=0 ppid=1877 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.884521 kernel: audit: type=1327 audit(1769041749.784:235): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:29:09.784000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:29:09.891072 kernel: audit: type=1325 audit(1769041749.879:236): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.879000 audit[1949]: 
NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.879000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebe5ae880 a2=0 a3=0 items=0 ppid=1877 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.965182 kernel: audit: type=1300 audit(1769041749.879:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebe5ae880 a2=0 a3=0 items=0 ppid=1877 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:29:09.908000 audit[1951]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.908000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff4c0a290 a2=0 a3=0 items=0 ppid=1877 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:29:09.931000 audit[1953]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.931000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcbb0cec10 a2=0 a3=0 items=0 ppid=1877 pid=1953 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:29:09.997000 audit[1955]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:09.997000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc8cf6d720 a2=0 a3=0 items=0 ppid=1877 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:09.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:29:10.081000 audit[1957]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.081000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcead5e920 a2=0 a3=0 items=0 ppid=1877 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:29:10.140000 audit[1959]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.140000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcfc8e5880 a2=0 a3=0 items=0 ppid=1877 pid=1959 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:29:10.407000 audit[1962]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.407000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff787eff10 a2=0 a3=0 items=0 ppid=1877 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 22 00:29:10.488000 audit[1964]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.496714 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 22 00:29:10.524271 kernel: audit: type=1325 audit(1769041750.488:243): table=filter:11 family=2 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.535334 kernel: audit: type=1300 audit(1769041750.488:243): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc795314e0 a2=0 a3=0 items=0 ppid=1877 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.589495 
kernel: audit: type=1327 audit(1769041750.488:243): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:29:10.488000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc795314e0 a2=0 a3=0 items=0 ppid=1877 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.488000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:29:10.601000 audit[1966]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.601000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffee62cb7e0 a2=0 a3=0 items=0 ppid=1877 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.672061 kernel: audit: type=1325 audit(1769041750.601:244): table=filter:12 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.673644 kernel: audit: type=1300 audit(1769041750.601:244): arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffee62cb7e0 a2=0 a3=0 items=0 ppid=1877 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.673703 kernel: audit: type=1327 audit(1769041750.601:244): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:29:10.601000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:29:10.658000 audit[1968]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.658000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd63314c50 a2=0 a3=0 items=0 ppid=1877 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.774452 kernel: audit: type=1325 audit(1769041750.658:245): table=filter:13 family=2 entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.777123 kernel: audit: type=1300 audit(1769041750.658:245): arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd63314c50 a2=0 a3=0 items=0 ppid=1877 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.777255 kernel: audit: type=1327 audit(1769041750.658:245): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:29:10.658000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:29:10.794393 kernel: audit: type=1325 audit(1769041750.680:246): table=filter:14 family=2 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.680000 audit[1970]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:10.680000 
audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffddf122c70 a2=0 a3=0 items=0 ppid=1877 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:10.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:29:11.625000 audit[2000]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.625000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdeaf0d2f0 a2=0 a3=0 items=0 ppid=1877 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.625000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:29:11.708000 audit[2002]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.708000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdf67460e0 a2=0 a3=0 items=0 ppid=1877 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:29:11.802000 audit[2004]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.802000 audit[2004]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff00954390 a2=0 a3=0 items=0 ppid=1877 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:29:11.830000 audit[2006]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.830000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd40c3f410 a2=0 a3=0 items=0 ppid=1877 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:29:11.849000 audit[2008]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.849000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2c7ad830 a2=0 a3=0 items=0 ppid=1877 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:29:11.866000 audit[2010]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.866000 audit[2010]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffce988ff70 a2=0 a3=0 items=0 ppid=1877 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:29:11.915000 audit[2012]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.915000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff2b314b20 a2=0 a3=0 items=0 ppid=1877 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.915000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:29:11.959000 audit[2014]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:11.959000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffec50c9140 a2=0 a3=0 items=0 ppid=1877 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:11.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:29:12.056000 audit[2016]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain 
pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.056000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd1774e1f0 a2=0 a3=0 items=0 ppid=1877 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 22 00:29:12.128000 audit[2018]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.128000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffb136be30 a2=0 a3=0 items=0 ppid=1877 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:29:12.159000 audit[2020]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.159000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe614d99e0 a2=0 a3=0 items=0 ppid=1877 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 
Jan 22 00:29:12.203000 audit[2022]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.203000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd52dfa430 a2=0 a3=0 items=0 ppid=1877 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:29:12.226000 audit[2024]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.226000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdbc666b80 a2=0 a3=0 items=0 ppid=1877 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:29:12.312000 audit[2029]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.312000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1a8223b0 a2=0 a3=0 items=0 ppid=1877 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.312000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:29:12.331000 audit[2031]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.331000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd288b2dc0 a2=0 a3=0 items=0 ppid=1877 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:29:12.357000 audit[2033]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.357000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdca779bf0 a2=0 a3=0 items=0 ppid=1877 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:29:12.370000 audit[2035]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.370000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd54ca81f0 a2=0 a3=0 items=0 ppid=1877 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.370000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:29:12.384000 audit[2037]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.384000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd1f1d2020 a2=0 a3=0 items=0 ppid=1877 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.384000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:29:12.398000 audit[2039]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:29:12.398000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe6c18bdd0 a2=0 a3=0 items=0 ppid=1877 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:29:12.502000 audit[2043]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.502000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd6515da60 a2=0 a3=0 items=0 ppid=1877 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.502000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 22 00:29:12.521000 audit[2045]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.521000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdce94b4c0 a2=0 a3=0 items=0 ppid=1877 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.521000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 22 00:29:12.577000 audit[2053]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.577000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffda834de00 a2=0 a3=0 items=0 ppid=1877 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.577000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 22 00:29:12.620000 audit[2059]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.620000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff6b57fc90 a2=0 a3=0 items=0 ppid=1877 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 22 00:29:12.633000 audit[2061]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.633000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc668d96d0 a2=0 a3=0 items=0 ppid=1877 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 22 00:29:12.651000 audit[2063]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.651000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff386a95c0 a2=0 a3=0 items=0 ppid=1877 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.651000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 22 00:29:12.664000 audit[2065]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.664000 audit[2065]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe4eff4b50 a2=0 a3=0 items=0 ppid=1877 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:29:12.676000 audit[2067]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:29:12.676000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc640b4fd0 a2=0 a3=0 items=0 ppid=1877 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:29:12.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 22 00:29:12.679716 systemd-networkd[1529]: docker0: Link UP Jan 22 00:29:12.705662 dockerd[1877]: time="2026-01-22T00:29:12.704398185Z" level=info msg="Loading containers: done." 
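[editor's note] The audit PROCTITLE records above encode the full command line as a hex string with NUL-separated argv entries. A minimal decoding sketch in plain Python, using the hex value from one of the iptables records above:

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
# Hex string copied from the "iptables --wait -t filter -N DOCKER-USER" record above.
hex_value = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D55534552"
)
argv = bytes.fromhex(hex_value).split(b"\x00")
command = " ".join(arg.decode() for arg in argv)
print(command)  # /usr/bin/iptables --wait -t filter -N DOCKER-USER
```

The same decoding applies to every PROCTITLE line in this log; `ausearch -i` performs it automatically when the raw audit records are available.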
Jan 22 00:29:12.832969 dockerd[1877]: time="2026-01-22T00:29:12.832615768Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 22 00:29:12.833680 dockerd[1877]: time="2026-01-22T00:29:12.833529282Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 22 00:29:12.834394 dockerd[1877]: time="2026-01-22T00:29:12.834205447Z" level=info msg="Initializing buildkit" Jan 22 00:29:12.970911 dockerd[1877]: time="2026-01-22T00:29:12.970695418Z" level=info msg="Completed buildkit initialization" Jan 22 00:29:13.005032 dockerd[1877]: time="2026-01-22T00:29:13.004679662Z" level=info msg="Daemon has completed initialization" Jan 22 00:29:13.006599 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 22 00:29:13.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:13.008071 dockerd[1877]: time="2026-01-22T00:29:13.005728391Z" level=info msg="API listen on /run/docker.sock" Jan 22 00:29:16.288580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 22 00:29:16.295127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:29:16.519985 containerd[1635]: time="2026-01-22T00:29:16.519549959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 22 00:29:16.911080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:29:16.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:29:16.946665 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 22 00:29:16.947064 kernel: audit: type=1130 audit(1769041756.910:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:16.963328 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:29:17.226147 kubelet[2118]: E0122 00:29:17.222777 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:29:17.231572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:29:17.232696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:29:17.235296 systemd[1]: kubelet.service: Consumed 589ms CPU time, 109M memory peak. Jan 22 00:29:17.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:17.265405 kernel: audit: type=1131 audit(1769041757.234:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:18.185572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134652170.mount: Deactivated successfully. Jan 22 00:29:20.331937 update_engine[1610]: I20260122 00:29:20.329679 1610 update_attempter.cc:509] Updating boot flags... 
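[editor's note] The kernel `audit(...)` lines carry an epoch timestamp (e.g. `audit(1769041756.910:275)` above) while the journal prefixes wall-clock time. A quick sketch converting that epoch to UTC to confirm the two clocks line up:

```python
from datetime import datetime, timezone

# Epoch seconds from the record "audit(1769041756.910:275)" above.
audit_epoch = 1769041756
utc = datetime.fromtimestamp(audit_epoch, tz=timezone.utc)
print(utc.strftime("%b %d %H:%M:%S"))  # Jan 22 00:29:16 -- matches the journal prefix
```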
Jan 22 00:29:27.382207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 22 00:29:27.397036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:29:29.113278 containerd[1635]: time="2026-01-22T00:29:29.112562246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:29.117603 containerd[1635]: time="2026-01-22T00:29:29.116685368Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=26225222" Jan 22 00:29:29.138735 containerd[1635]: time="2026-01-22T00:29:29.138532036Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:29.176312 containerd[1635]: time="2026-01-22T00:29:29.176205280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:29.182563 containerd[1635]: time="2026-01-22T00:29:29.182255606Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 12.661020519s" Jan 22 00:29:29.182771 containerd[1635]: time="2026-01-22T00:29:29.182663428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 22 00:29:29.206152 containerd[1635]: time="2026-01-22T00:29:29.205482280Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 22 00:29:30.369542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:29:30.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:30.404318 kernel: audit: type=1130 audit(1769041770.369:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:30.491591 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:29:31.945754 kubelet[2211]: E0122 00:29:31.941464 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:29:31.965662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:29:31.966295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:29:31.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:31.968120 systemd[1]: kubelet.service: Consumed 2.981s CPU time, 110.2M memory peak. Jan 22 00:29:32.003270 kernel: audit: type=1131 audit(1769041771.966:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 22 00:29:37.741616 containerd[1635]: time="2026-01-22T00:29:37.740474975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:37.747188 containerd[1635]: time="2026-01-22T00:29:37.747144803Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 22 00:29:37.752547 containerd[1635]: time="2026-01-22T00:29:37.750475154Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:37.775037 containerd[1635]: time="2026-01-22T00:29:37.771361532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:37.779710 containerd[1635]: time="2026-01-22T00:29:37.778544501Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 8.572949765s" Jan 22 00:29:37.779710 containerd[1635]: time="2026-01-22T00:29:37.778713995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 22 00:29:37.793114 containerd[1635]: time="2026-01-22T00:29:37.792276585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 22 00:29:42.075007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 22 00:29:42.105418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:29:45.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:45.249452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:29:45.290136 kernel: audit: type=1130 audit(1769041785.249:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:45.308146 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:29:46.134709 kubelet[2236]: E0122 00:29:46.133565 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:29:46.203375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:29:46.204684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:29:46.264572 systemd[1]: kubelet.service: Consumed 1.952s CPU time, 111.4M memory peak. Jan 22 00:29:46.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:46.302261 kernel: audit: type=1131 audit(1769041786.263:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 22 00:29:47.141426 containerd[1635]: time="2026-01-22T00:29:47.135752510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:47.141426 containerd[1635]: time="2026-01-22T00:29:47.139382683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 22 00:29:47.166683 containerd[1635]: time="2026-01-22T00:29:47.165977711Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:47.177140 containerd[1635]: time="2026-01-22T00:29:47.175552003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:47.177140 containerd[1635]: time="2026-01-22T00:29:47.177004063Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 9.384665172s" Jan 22 00:29:47.185220 containerd[1635]: time="2026-01-22T00:29:47.177157228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 22 00:29:47.215725 containerd[1635]: time="2026-01-22T00:29:47.208717170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 22 00:29:51.765665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162267810.mount: Deactivated successfully. 
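[editor's note] The containerd messages pair a `bytes read` figure with a pull duration, so effective pull throughput can be estimated directly from the log. A sketch for the kube-scheduler pull above (note `bytes read` counts network traffic during the pull, which can differ from the unpacked image size also reported):

```python
# kube-scheduler:v1.34.3 pull; both figures taken from the containerd log above.
bytes_read = 15_717_792          # "active requests=0, bytes read=15717792"
seconds = 9.384665172            # "... in 9.384665172s"
rate_mb_s = bytes_read / seconds / 1e6
print(f"{rate_mb_s:.2f} MB/s")   # roughly 1.67 MB/s
```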
Jan 22 00:29:55.095404 containerd[1635]: time="2026-01-22T00:29:55.094710585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:55.100664 containerd[1635]: time="2026-01-22T00:29:55.097522857Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25962213" Jan 22 00:29:55.101995 containerd[1635]: time="2026-01-22T00:29:55.101752719Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:55.107272 containerd[1635]: time="2026-01-22T00:29:55.107089988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:29:55.107643 containerd[1635]: time="2026-01-22T00:29:55.107449069Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 7.898235846s" Jan 22 00:29:55.107643 containerd[1635]: time="2026-01-22T00:29:55.107608967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 22 00:29:55.112983 containerd[1635]: time="2026-01-22T00:29:55.112701080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 22 00:29:56.290377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 22 00:29:56.302634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 22 00:29:57.591394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538045966.mount: Deactivated successfully. Jan 22 00:29:57.914433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:29:57.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:57.976227 kernel: audit: type=1130 audit(1769041797.923:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:29:57.987602 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:29:58.606167 kubelet[2269]: E0122 00:29:58.604354 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:29:58.690556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:29:58.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:29:58.702702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:29:58.789417 systemd[1]: kubelet.service: Consumed 1.345s CPU time, 109.9M memory peak. 
Jan 22 00:29:58.793315 kernel: audit: type=1131 audit(1769041798.760:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:08.793700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 22 00:30:08.810579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:10.197286 containerd[1635]: time="2026-01-22T00:30:10.193070381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:10.200784 containerd[1635]: time="2026-01-22T00:30:10.200243456Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22248103"
Jan 22 00:30:10.207076 containerd[1635]: time="2026-01-22T00:30:10.206744899Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:10.217229 containerd[1635]: time="2026-01-22T00:30:10.215776067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:10.281200 containerd[1635]: time="2026-01-22T00:30:10.280156979Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 15.166578425s"
Jan 22 00:30:10.281200 containerd[1635]: time="2026-01-22T00:30:10.280314214Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 22 00:30:10.361399 containerd[1635]: time="2026-01-22T00:30:10.358498382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 22 00:30:11.618638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:11.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:11.652091 kernel: audit: type=1130 audit(1769041811.618:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:11.668363 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 00:30:11.913506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781986456.mount: Deactivated successfully.
Jan 22 00:30:11.961117 containerd[1635]: time="2026-01-22T00:30:11.960590154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:11.966109 containerd[1635]: time="2026-01-22T00:30:11.965642664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=501"
Jan 22 00:30:11.968983 containerd[1635]: time="2026-01-22T00:30:11.968111742Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:11.975327 containerd[1635]: time="2026-01-22T00:30:11.975292998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:11.977546 containerd[1635]: time="2026-01-22T00:30:11.976729627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.614347217s"
Jan 22 00:30:11.977683 containerd[1635]: time="2026-01-22T00:30:11.977661827Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 22 00:30:11.994559 containerd[1635]: time="2026-01-22T00:30:11.994418846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 22 00:30:12.306407 kubelet[2330]: E0122 00:30:12.287982 2330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 00:30:12.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:12.334767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 00:30:12.335497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 00:30:12.354760 systemd[1]: kubelet.service: Consumed 2.046s CPU time, 110.6M memory peak.
Jan 22 00:30:12.390388 kernel: audit: type=1131 audit(1769041812.353:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:14.307271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553723152.mount: Deactivated successfully.
Jan 22 00:30:22.562135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 22 00:30:22.585058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:25.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:25.968672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:26.013057 kernel: audit: type=1130 audit(1769041825.968:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:26.041215 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 00:30:26.959386 kubelet[2403]: E0122 00:30:26.957523 2403 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 00:30:26.996148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 00:30:26.999705 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 00:30:27.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:27.006727 systemd[1]: kubelet.service: Consumed 2.040s CPU time, 110.5M memory peak.
Jan 22 00:30:27.065155 kernel: audit: type=1131 audit(1769041827.005:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:37.107716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 22 00:30:37.137364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:39.583696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:39.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:39.615034 kernel: audit: type=1130 audit(1769041839.583:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:39.642268 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 00:30:40.112905 containerd[1635]: time="2026-01-22T00:30:40.112270107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:40.122344 containerd[1635]: time="2026-01-22T00:30:40.122015968Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72999887"
Jan 22 00:30:40.133117 containerd[1635]: time="2026-01-22T00:30:40.133043332Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:40.169733 containerd[1635]: time="2026-01-22T00:30:40.169595547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:30:40.178205 containerd[1635]: time="2026-01-22T00:30:40.177295844Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 28.18275027s"
Jan 22 00:30:40.178205 containerd[1635]: time="2026-01-22T00:30:40.177334316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 22 00:30:40.269660 kubelet[2425]: E0122 00:30:40.269584 2425 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 22 00:30:40.285083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 00:30:40.285691 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 00:30:40.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:40.287400 systemd[1]: kubelet.service: Consumed 1.764s CPU time, 110M memory peak.
Jan 22 00:30:40.315599 kernel: audit: type=1131 audit(1769041840.287:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:50.555559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 22 00:30:50.567520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:51.285522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:51.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:51.312931 kernel: audit: type=1130 audit(1769041851.286:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:51.321624 (kubelet)[2463]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 22 00:30:51.473213 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:51.477174 systemd[1]: kubelet.service: Deactivated successfully.
Jan 22 00:30:51.477739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:51.478227 systemd[1]: kubelet.service: Consumed 492ms CPU time, 108.1M memory peak.
Jan 22 00:30:51.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:51.490635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:51.506334 kernel: audit: type=1131 audit(1769041851.477:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:51.581563 systemd[1]: Reload requested from client PID 2479 ('systemctl') (unit session-9.scope)...
Jan 22 00:30:51.581730 systemd[1]: Reloading...
Jan 22 00:30:51.819458 zram_generator::config[2525]: No configuration found.
Jan 22 00:30:52.453529 systemd[1]: Reloading finished in 870 ms.
Jan 22 00:30:52.510000 audit: BPF prog-id=61 op=LOAD
Jan 22 00:30:52.524112 kernel: audit: type=1334 audit(1769041852.510:291): prog-id=61 op=LOAD
Jan 22 00:30:52.526907 kernel: audit: type=1334 audit(1769041852.511:292): prog-id=56 op=UNLOAD
Jan 22 00:30:52.511000 audit: BPF prog-id=56 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=62 op=LOAD
Jan 22 00:30:52.534772 kernel: audit: type=1334 audit(1769041852.513:293): prog-id=62 op=LOAD
Jan 22 00:30:52.535353 kernel: audit: type=1334 audit(1769041852.513:294): prog-id=48 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=48 op=UNLOAD
Jan 22 00:30:52.546068 kernel: audit: type=1334 audit(1769041852.513:295): prog-id=63 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=63 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=64 op=LOAD
Jan 22 00:30:52.558197 kernel: audit: type=1334 audit(1769041852.513:296): prog-id=64 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=49 op=UNLOAD
Jan 22 00:30:52.568599 kernel: audit: type=1334 audit(1769041852.513:297): prog-id=49 op=UNLOAD
Jan 22 00:30:52.573206 kernel: audit: type=1334 audit(1769041852.513:298): prog-id=50 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=50 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=65 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=44 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=66 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=45 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=67 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=68 op=LOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=46 op=UNLOAD
Jan 22 00:30:52.513000 audit: BPF prog-id=47 op=UNLOAD
Jan 22 00:30:52.521000 audit: BPF prog-id=69 op=LOAD
Jan 22 00:30:52.522000 audit: BPF prog-id=58 op=UNLOAD
Jan 22 00:30:52.522000 audit: BPF prog-id=70 op=LOAD
Jan 22 00:30:52.522000 audit: BPF prog-id=71 op=LOAD
Jan 22 00:30:52.522000 audit: BPF prog-id=59 op=UNLOAD
Jan 22 00:30:52.522000 audit: BPF prog-id=60 op=UNLOAD
Jan 22 00:30:52.585000 audit: BPF prog-id=72 op=LOAD
Jan 22 00:30:52.589000 audit: BPF prog-id=41 op=UNLOAD
Jan 22 00:30:52.590000 audit: BPF prog-id=73 op=LOAD
Jan 22 00:30:52.592000 audit: BPF prog-id=74 op=LOAD
Jan 22 00:30:52.592000 audit: BPF prog-id=42 op=UNLOAD
Jan 22 00:30:52.592000 audit: BPF prog-id=43 op=UNLOAD
Jan 22 00:30:52.595000 audit: BPF prog-id=75 op=LOAD
Jan 22 00:30:52.595000 audit: BPF prog-id=76 op=LOAD
Jan 22 00:30:52.595000 audit: BPF prog-id=54 op=UNLOAD
Jan 22 00:30:52.595000 audit: BPF prog-id=55 op=UNLOAD
Jan 22 00:30:52.599000 audit: BPF prog-id=77 op=LOAD
Jan 22 00:30:52.600000 audit: BPF prog-id=57 op=UNLOAD
Jan 22 00:30:52.608000 audit: BPF prog-id=78 op=LOAD
Jan 22 00:30:52.608000 audit: BPF prog-id=51 op=UNLOAD
Jan 22 00:30:52.608000 audit: BPF prog-id=79 op=LOAD
Jan 22 00:30:52.608000 audit: BPF prog-id=80 op=LOAD
Jan 22 00:30:52.608000 audit: BPF prog-id=52 op=UNLOAD
Jan 22 00:30:52.608000 audit: BPF prog-id=53 op=UNLOAD
Jan 22 00:30:52.714564 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 22 00:30:52.716330 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 22 00:30:52.719337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:52.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 22 00:30:52.722647 systemd[1]: kubelet.service: Consumed 297ms CPU time, 98.3M memory peak.
Jan 22 00:30:52.732602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 22 00:30:53.380325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 22 00:30:53.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:30:53.400495 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 22 00:30:53.738014 kubelet[2573]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 22 00:30:53.738014 kubelet[2573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 00:30:53.739210 kubelet[2573]: I0122 00:30:53.738936 2573 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 00:30:54.195917 kubelet[2573]: I0122 00:30:54.194139 2573 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 22 00:30:54.195917 kubelet[2573]: I0122 00:30:54.195337 2573 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 00:30:54.195917 kubelet[2573]: I0122 00:30:54.196003 2573 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 22 00:30:54.195917 kubelet[2573]: I0122 00:30:54.196026 2573 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 22 00:30:54.199908 kubelet[2573]: I0122 00:30:54.198906 2573 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 22 00:30:54.357456 kubelet[2573]: I0122 00:30:54.357258 2573 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 22 00:30:54.401102 kubelet[2573]: E0122 00:30:54.400437 2573 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 22 00:30:54.410364 kubelet[2573]: I0122 00:30:54.410185 2573 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 00:30:54.435414 kubelet[2573]: I0122 00:30:54.434777 2573 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 22 00:30:54.436150 kubelet[2573]: I0122 00:30:54.435997 2573 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 00:30:54.436650 kubelet[2573]: I0122 00:30:54.436095 2573 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 00:30:54.436650 kubelet[2573]: I0122 00:30:54.436646 2573 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 00:30:54.438066 kubelet[2573]: I0122 00:30:54.436668 2573 container_manager_linux.go:306] "Creating device plugin manager"
Jan 22 00:30:54.438066 kubelet[2573]: I0122 00:30:54.437156 2573 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 22 00:30:54.446377 kubelet[2573]: I0122 00:30:54.443664 2573 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 00:30:54.447450 kubelet[2573]: I0122 00:30:54.447321 2573 kubelet.go:475] "Attempting to sync node with API server"
Jan 22 00:30:54.447450 kubelet[2573]: I0122 00:30:54.447402 2573 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 00:30:54.447450 kubelet[2573]: I0122 00:30:54.447452 2573 kubelet.go:387] "Adding apiserver pod source"
Jan 22 00:30:54.448057 kubelet[2573]: I0122 00:30:54.447925 2573 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 00:30:54.451787 kubelet[2573]: E0122 00:30:54.451529 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 00:30:54.452413 kubelet[2573]: E0122 00:30:54.452304 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 00:30:54.462626 kubelet[2573]: I0122 00:30:54.462525 2573 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 22 00:30:54.464026 kubelet[2573]: I0122 00:30:54.463934 2573 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 22 00:30:54.464026 kubelet[2573]: I0122 00:30:54.463975 2573 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 22 00:30:54.464372 kubelet[2573]: W0122 00:30:54.464322 2573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 22 00:30:54.476147 kubelet[2573]: I0122 00:30:54.476031 2573 server.go:1262] "Started kubelet"
Jan 22 00:30:54.478262 kubelet[2573]: I0122 00:30:54.478140 2573 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 00:30:54.478679 kubelet[2573]: I0122 00:30:54.478327 2573 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 22 00:30:54.482911 kubelet[2573]: I0122 00:30:54.482629 2573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 22 00:30:54.482911 kubelet[2573]: I0122 00:30:54.482760 2573 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 00:30:54.483417 kubelet[2573]: I0122 00:30:54.483270 2573 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 00:30:54.486656 kubelet[2573]: I0122 00:30:54.486493 2573 server.go:310] "Adding debug handlers to kubelet server"
Jan 22 00:30:54.492124 kubelet[2573]: E0122 00:30:54.485094 2573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ce631a0d1a3b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-22 00:30:54.475641776 +0000 UTC m=+1.054057083,LastTimestamp:2026-01-22 00:30:54.475641776 +0000 UTC m=+1.054057083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 22 00:30:54.492124 kubelet[2573]: I0122 00:30:54.491283 2573 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 22 00:30:54.492443 kubelet[2573]: E0122 00:30:54.492422 2573 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 22 00:30:54.492522 kubelet[2573]: I0122 00:30:54.492502 2573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 22 00:30:54.495983 kubelet[2573]: I0122 00:30:54.495670 2573 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 22 00:30:54.496048 kubelet[2573]: I0122 00:30:54.496031 2573 reconciler.go:29] "Reconciler: start to sync state"
Jan 22 00:30:54.499309 kubelet[2573]: E0122 00:30:54.499266 2573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms"
Jan 22 00:30:54.500661 kubelet[2573]: E0122 00:30:54.500347 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 00:30:54.503404 kubelet[2573]: I0122 00:30:54.503374 2573 factory.go:223] Registration of the systemd container factory successfully
Jan 22 00:30:54.504166 kubelet[2573]: I0122 00:30:54.503665 2573 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 22 00:30:54.504166 kubelet[2573]: E0122 00:30:54.504134 2573 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 22 00:30:54.541566 kubelet[2573]: I0122 00:30:54.540934 2573 factory.go:223] Registration of the containerd container factory successfully
Jan 22 00:30:54.561000 audit[2594]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:30:54.561000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3cf48d10 a2=0 a3=0 items=0 ppid=2573 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 22 00:30:54.571000 audit[2596]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:30:54.571000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe10bfd9f0 a2=0 a3=0 items=0 ppid=2573 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 22 00:30:54.580000 audit[2598]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:30:54.580000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd278e9bf0 a2=0 a3=0 items=0 ppid=2573 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 22 00:30:54.593000 audit[2601]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:30:54.593000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda6fcf760 a2=0 a3=0 items=0 ppid=2573 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 22 00:30:54.594647 kubelet[2573]: E0122 00:30:54.594605 2573 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 22 00:30:54.603349 kubelet[2573]: I0122 00:30:54.603258 2573 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 22 00:30:54.603349 kubelet[2573]: I0122 00:30:54.603317 2573 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 22 00:30:54.603544 kubelet[2573]: I0122 00:30:54.603426 2573 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 00:30:54.611458 kubelet[2573]: I0122 00:30:54.611281 2573 policy_none.go:49] "None policy: Start"
Jan 22 00:30:54.611458 kubelet[2573]: I0122 00:30:54.611409 2573 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 22 00:30:54.611631 kubelet[2573]: I0122 00:30:54.611473 2573 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 22 00:30:54.617039 kubelet[2573]: I0122 00:30:54.616630 2573 policy_none.go:47] "Start"
Jan 22 00:30:54.620000 audit[2605]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:30:54.620000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe53b4c890 a2=0 a3=0 items=0 ppid=2573 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E
Jan 22 00:30:54.625053 kubelet[2573]: I0122 00:30:54.623078 2573 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 22 00:30:54.626000 audit[2607]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 22 00:30:54.626000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff0c6b9690 a2=0 a3=0 items=0 ppid=2573 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.626000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 22 00:30:54.629076 kubelet[2573]: I0122 00:30:54.629009 2573 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 22 00:30:54.629166 kubelet[2573]: I0122 00:30:54.629134 2573 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 22 00:30:54.629290 kubelet[2573]: I0122 00:30:54.629226 2573 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 22 00:30:54.629423 kubelet[2573]: E0122 00:30:54.629345 2573 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 22 00:30:54.632479 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 22 00:30:54.633000 audit[2608]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:30:54.633000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2b183590 a2=0 a3=0 items=0 ppid=2573 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 22 00:30:54.633000 audit[2609]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 22 00:30:54.633000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9f28da20 a2=0 a3=0 items=0 ppid=2573 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:30:54.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 22 00:30:54.638416 kubelet[2573]: E0122 00:30:54.635517 2573 reflector.go:205] "Failed
to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 00:30:54.640000 audit[2610]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:30:54.640000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8ddb8190 a2=0 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:54.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:30:54.645000 audit[2611]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:30:54.645000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd77f724f0 a2=0 a3=0 items=0 ppid=2573 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:54.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:30:54.648000 audit[2612]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:30:54.648000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdaa0fc3c0 a2=0 a3=0 items=0 ppid=2573 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:54.648000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:30:54.650595 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 22 00:30:54.652000 audit[2613]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:30:54.652000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd4a4a3e0 a2=0 a3=0 items=0 ppid=2573 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:54.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:30:54.659766 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 22 00:30:54.679266 kubelet[2573]: E0122 00:30:54.678957 2573 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 00:30:54.679452 kubelet[2573]: I0122 00:30:54.679429 2573 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:30:54.679540 kubelet[2573]: I0122 00:30:54.679453 2573 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:30:54.680639 kubelet[2573]: I0122 00:30:54.680427 2573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:30:54.683565 kubelet[2573]: E0122 00:30:54.683058 2573 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 22 00:30:54.683565 kubelet[2573]: E0122 00:30:54.683161 2573 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 22 00:30:54.703219 kubelet[2573]: E0122 00:30:54.702771 2573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Jan 22 00:30:54.799784 kubelet[2573]: I0122 00:30:54.799092 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 22 00:30:54.799784 kubelet[2573]: I0122 00:30:54.799289 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4d4d0e17fd226e378560c19b2cb09c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a4d4d0e17fd226e378560c19b2cb09c3\") " pod="kube-system/kube-apiserver-localhost" Jan 22 00:30:54.799784 kubelet[2573]: I0122 00:30:54.799320 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4d4d0e17fd226e378560c19b2cb09c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a4d4d0e17fd226e378560c19b2cb09c3\") " pod="kube-system/kube-apiserver-localhost" Jan 22 00:30:54.799784 kubelet[2573]: I0122 00:30:54.799344 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") 
pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:30:54.799784 kubelet[2573]: I0122 00:30:54.799366 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:30:54.803301 kubelet[2573]: I0122 00:30:54.799388 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:30:54.803301 kubelet[2573]: I0122 00:30:54.799507 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4d4d0e17fd226e378560c19b2cb09c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a4d4d0e17fd226e378560c19b2cb09c3\") " pod="kube-system/kube-apiserver-localhost" Jan 22 00:30:54.803301 kubelet[2573]: I0122 00:30:54.799528 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:30:54.803301 kubelet[2573]: I0122 00:30:54.799547 2573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:30:54.806095 kubelet[2573]: I0122 00:30:54.805987 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 00:30:54.808143 kubelet[2573]: E0122 00:30:54.807778 2573 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 22 00:30:54.819777 systemd[1]: Created slice kubepods-burstable-poda4d4d0e17fd226e378560c19b2cb09c3.slice - libcontainer container kubepods-burstable-poda4d4d0e17fd226e378560c19b2cb09c3.slice. Jan 22 00:30:54.883449 kubelet[2573]: E0122 00:30:54.883030 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:30:54.900964 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 22 00:30:54.957521 kubelet[2573]: E0122 00:30:54.956486 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:30:54.998479 kubelet[2573]: E0122 00:30:54.998210 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:55.004156 containerd[1635]: time="2026-01-22T00:30:55.004005437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 22 00:30:55.008582 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 22 00:30:55.049901 kubelet[2573]: E0122 00:30:55.049283 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:30:55.049901 kubelet[2573]: I0122 00:30:55.049561 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 00:30:55.052640 kubelet[2573]: E0122 00:30:55.050467 2573 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 22 00:30:55.055354 kubelet[2573]: E0122 00:30:55.054965 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:55.056602 containerd[1635]: time="2026-01-22T00:30:55.056394361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 22 00:30:55.106922 
kubelet[2573]: E0122 00:30:55.105588 2573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Jan 22 00:30:55.207004 kubelet[2573]: E0122 00:30:55.187782 2573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ce631a0d1a3b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-22 00:30:54.475641776 +0000 UTC m=+1.054057083,LastTimestamp:2026-01-22 00:30:54.475641776 +0000 UTC m=+1.054057083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 22 00:30:55.218546 kubelet[2573]: E0122 00:30:55.216563 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:55.220390 containerd[1635]: time="2026-01-22T00:30:55.220127300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a4d4d0e17fd226e378560c19b2cb09c3,Namespace:kube-system,Attempt:0,}" Jan 22 00:30:55.460697 kubelet[2573]: I0122 00:30:55.460260 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 00:30:55.464237 kubelet[2573]: E0122 00:30:55.462129 2573 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: 
connect: connection refused" node="localhost" Jan 22 00:30:55.540208 kubelet[2573]: E0122 00:30:55.534637 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 00:30:55.563150 kubelet[2573]: E0122 00:30:55.560629 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 00:30:55.715247 kubelet[2573]: E0122 00:30:55.714554 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 00:30:55.733266 kubelet[2573]: E0122 00:30:55.732318 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 00:30:55.918413 kubelet[2573]: E0122 00:30:55.908657 2573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" Jan 22 00:30:55.991673 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3596246371.mount: Deactivated successfully. Jan 22 00:30:56.024162 containerd[1635]: time="2026-01-22T00:30:56.023986625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:30:56.034419 containerd[1635]: time="2026-01-22T00:30:56.034159779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:30:56.043383 containerd[1635]: time="2026-01-22T00:30:56.043275289Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:30:56.054187 containerd[1635]: time="2026-01-22T00:30:56.053604116Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:30:56.058258 containerd[1635]: time="2026-01-22T00:30:56.058050844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:30:56.065130 containerd[1635]: time="2026-01-22T00:30:56.063131456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:30:56.068702 containerd[1635]: time="2026-01-22T00:30:56.068562284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:30:56.071780 containerd[1635]: time="2026-01-22T00:30:56.071483414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:30:56.073192 containerd[1635]: time="2026-01-22T00:30:56.072643989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.006324523s" Jan 22 00:30:56.082216 containerd[1635]: time="2026-01-22T00:30:56.081973378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 853.423013ms" Jan 22 00:30:56.093447 containerd[1635]: time="2026-01-22T00:30:56.093005725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.064037335s" Jan 22 00:30:56.268556 kubelet[2573]: I0122 00:30:56.267560 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 00:30:56.277314 kubelet[2573]: E0122 00:30:56.270519 2573 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 22 00:30:56.348432 containerd[1635]: time="2026-01-22T00:30:56.347055682Z" level=info msg="connecting to shim 4f020aca9ebbd83d7b22bd21ecf6c2dee4cbf150fdd021d54c5473fd25161745" address="unix:///run/containerd/s/574fca66a4503526f435e292678de307f2dd69372229f7b60b82128b23de442c" 
namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:30:56.348432 containerd[1635]: time="2026-01-22T00:30:56.350590177Z" level=info msg="connecting to shim 5d0bd6c426e6143bb0f82d8ce6b6385b4c9ae15d254c4bd1d2a179df479c719e" address="unix:///run/containerd/s/b08b133249d014b4e9ab2ec6f2ee3435b04c0a95a607f1dd4299953e48ceb702" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:30:56.359194 containerd[1635]: time="2026-01-22T00:30:56.359137840Z" level=info msg="connecting to shim 2e751e86f2a85bbdb85b853bf24e2ec3b95be4030d52d1e00d1eec64e80dd5b7" address="unix:///run/containerd/s/e88f5396e7c6e1de237aecec03aa54933ea4d0eb76a4f7678093913a488605b0" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:30:56.790443 kubelet[2573]: E0122 00:30:56.790194 2573 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 00:30:56.801140 systemd[1]: Started cri-containerd-2e751e86f2a85bbdb85b853bf24e2ec3b95be4030d52d1e00d1eec64e80dd5b7.scope - libcontainer container 2e751e86f2a85bbdb85b853bf24e2ec3b95be4030d52d1e00d1eec64e80dd5b7. Jan 22 00:30:56.849466 systemd[1]: Started cri-containerd-4f020aca9ebbd83d7b22bd21ecf6c2dee4cbf150fdd021d54c5473fd25161745.scope - libcontainer container 4f020aca9ebbd83d7b22bd21ecf6c2dee4cbf150fdd021d54c5473fd25161745. Jan 22 00:30:56.896217 systemd[1]: Started cri-containerd-5d0bd6c426e6143bb0f82d8ce6b6385b4c9ae15d254c4bd1d2a179df479c719e.scope - libcontainer container 5d0bd6c426e6143bb0f82d8ce6b6385b4c9ae15d254c4bd1d2a179df479c719e. 
Jan 22 00:30:56.999000 audit: BPF prog-id=81 op=LOAD Jan 22 00:30:57.005297 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 22 00:30:57.007175 kernel: audit: type=1334 audit(1769041856.999:345): prog-id=81 op=LOAD Jan 22 00:30:57.003000 audit: BPF prog-id=82 op=LOAD Jan 22 00:30:57.003000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.051973 kernel: audit: type=1334 audit(1769041857.003:346): prog-id=82 op=LOAD Jan 22 00:30:57.052112 kernel: audit: type=1300 audit(1769041857.003:346): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.096910 kernel: audit: type=1327 audit(1769041857.003:346): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.003000 audit: BPF prog-id=82 op=UNLOAD Jan 22 00:30:57.104100 kernel: audit: type=1334 audit(1769041857.003:347): prog-id=82 op=UNLOAD Jan 22 00:30:57.003000 audit[2667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.200019 kernel: audit: type=1300 audit(1769041857.003:347): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.200246 kernel: audit: type=1327 audit(1769041857.003:347): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.005000 audit: BPF prog-id=83 op=LOAD Jan 22 00:30:57.005000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.235539 kernel: audit: type=1334 audit(1769041857.005:348): prog-id=83 op=LOAD Jan 22 00:30:57.235702 kernel: audit: type=1300 audit(1769041857.005:348): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.235906 
kernel: audit: type=1327 audit(1769041857.005:348): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.260531 kubelet[2573]: E0122 00:30:57.255659 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 00:30:57.006000 audit: BPF prog-id=84 op=LOAD Jan 22 00:30:57.006000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.007000 audit: BPF prog-id=84 op=UNLOAD Jan 22 00:30:57.007000 audit[2667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.008000 audit: BPF prog-id=83 op=UNLOAD Jan 22 00:30:57.008000 audit[2667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.008000 audit: BPF prog-id=85 op=LOAD Jan 22 00:30:57.008000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2648 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466303230616361396562626438336437623232626432316563663663 Jan 22 00:30:57.023000 audit: BPF prog-id=86 op=LOAD Jan 22 00:30:57.028000 audit: BPF prog-id=87 op=LOAD Jan 22 00:30:57.028000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2642 pid=2686 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.029000 audit: BPF prog-id=87 op=UNLOAD Jan 22 00:30:57.029000 audit[2686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.029000 audit: BPF prog-id=88 op=LOAD Jan 22 00:30:57.029000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2642 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.029000 audit: BPF prog-id=89 op=LOAD Jan 22 00:30:57.029000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 
items=0 ppid=2642 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.030000 audit: BPF prog-id=89 op=UNLOAD Jan 22 00:30:57.030000 audit[2686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.030000 audit: BPF prog-id=88 op=UNLOAD Jan 22 00:30:57.030000 audit[2686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.030000 audit: BPF prog-id=90 op=LOAD Jan 22 00:30:57.030000 audit[2686]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 
a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2642 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564306264366334323665363134336262306638326438636536623633 Jan 22 00:30:57.078000 audit: BPF prog-id=91 op=LOAD Jan 22 00:30:57.079000 audit: BPF prog-id=92 op=LOAD Jan 22 00:30:57.079000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 22 00:30:57.079000 audit: BPF prog-id=92 op=UNLOAD Jan 22 00:30:57.079000 audit[2670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 22 00:30:57.080000 audit: BPF prog-id=93 
op=LOAD Jan 22 00:30:57.080000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 22 00:30:57.276000 audit: BPF prog-id=94 op=LOAD Jan 22 00:30:57.276000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 22 00:30:57.276000 audit: BPF prog-id=94 op=UNLOAD Jan 22 00:30:57.276000 audit[2670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 
22 00:30:57.276000 audit: BPF prog-id=93 op=UNLOAD Jan 22 00:30:57.276000 audit[2670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 22 00:30:57.276000 audit: BPF prog-id=95 op=LOAD Jan 22 00:30:57.276000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2644 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:57.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373531653836663261383562626462383562383533626632346532 Jan 22 00:30:57.568293 kubelet[2573]: E0122 00:30:57.568135 2573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="3.2s" Jan 22 00:30:57.573595 kubelet[2573]: E0122 00:30:57.573404 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 00:30:57.603958 containerd[1635]: time="2026-01-22T00:30:57.603326834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a4d4d0e17fd226e378560c19b2cb09c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d0bd6c426e6143bb0f82d8ce6b6385b4c9ae15d254c4bd1d2a179df479c719e\"" Jan 22 00:30:57.610863 kubelet[2573]: E0122 00:30:57.610707 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:57.634075 containerd[1635]: time="2026-01-22T00:30:57.632355286Z" level=info msg="CreateContainer within sandbox \"5d0bd6c426e6143bb0f82d8ce6b6385b4c9ae15d254c4bd1d2a179df479c719e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 22 00:30:57.681264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265570238.mount: Deactivated successfully. 
Jan 22 00:30:57.777721 containerd[1635]: time="2026-01-22T00:30:57.777339410Z" level=info msg="Container 5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:30:57.817225 containerd[1635]: time="2026-01-22T00:30:57.804919208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f020aca9ebbd83d7b22bd21ecf6c2dee4cbf150fdd021d54c5473fd25161745\"" Jan 22 00:30:57.824241 kubelet[2573]: E0122 00:30:57.822527 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:57.834435 containerd[1635]: time="2026-01-22T00:30:57.834289803Z" level=info msg="CreateContainer within sandbox \"5d0bd6c426e6143bb0f82d8ce6b6385b4c9ae15d254c4bd1d2a179df479c719e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b\"" Jan 22 00:30:57.835613 containerd[1635]: time="2026-01-22T00:30:57.835514614Z" level=info msg="StartContainer for \"5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b\"" Jan 22 00:30:57.836362 containerd[1635]: time="2026-01-22T00:30:57.836331187Z" level=info msg="CreateContainer within sandbox \"4f020aca9ebbd83d7b22bd21ecf6c2dee4cbf150fdd021d54c5473fd25161745\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 22 00:30:57.838574 containerd[1635]: time="2026-01-22T00:30:57.838443375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e751e86f2a85bbdb85b853bf24e2ec3b95be4030d52d1e00d1eec64e80dd5b7\"" Jan 22 00:30:57.838717 containerd[1635]: time="2026-01-22T00:30:57.838458440Z" level=info msg="connecting 
to shim 5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b" address="unix:///run/containerd/s/b08b133249d014b4e9ab2ec6f2ee3435b04c0a95a607f1dd4299953e48ceb702" protocol=ttrpc version=3 Jan 22 00:30:57.842324 kubelet[2573]: E0122 00:30:57.842228 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:57.862321 containerd[1635]: time="2026-01-22T00:30:57.862155000Z" level=info msg="CreateContainer within sandbox \"2e751e86f2a85bbdb85b853bf24e2ec3b95be4030d52d1e00d1eec64e80dd5b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 22 00:30:57.870705 containerd[1635]: time="2026-01-22T00:30:57.870392818Z" level=info msg="Container 1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:30:57.873449 kubelet[2573]: I0122 00:30:57.873167 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 00:30:57.874024 kubelet[2573]: E0122 00:30:57.873695 2573 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jan 22 00:30:57.932577 containerd[1635]: time="2026-01-22T00:30:57.932017556Z" level=info msg="Container 91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:30:57.941003 containerd[1635]: time="2026-01-22T00:30:57.940604751Z" level=info msg="CreateContainer within sandbox \"4f020aca9ebbd83d7b22bd21ecf6c2dee4cbf150fdd021d54c5473fd25161745\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048\"" Jan 22 00:30:57.950471 containerd[1635]: time="2026-01-22T00:30:57.950361532Z" level=info msg="StartContainer for 
\"1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048\"" Jan 22 00:30:57.954074 containerd[1635]: time="2026-01-22T00:30:57.953722878Z" level=info msg="connecting to shim 1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048" address="unix:///run/containerd/s/574fca66a4503526f435e292678de307f2dd69372229f7b60b82128b23de442c" protocol=ttrpc version=3 Jan 22 00:30:57.962241 systemd[1]: Started cri-containerd-5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b.scope - libcontainer container 5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b. Jan 22 00:30:57.964147 containerd[1635]: time="2026-01-22T00:30:57.964084050Z" level=info msg="CreateContainer within sandbox \"2e751e86f2a85bbdb85b853bf24e2ec3b95be4030d52d1e00d1eec64e80dd5b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08\"" Jan 22 00:30:57.968208 containerd[1635]: time="2026-01-22T00:30:57.968173900Z" level=info msg="StartContainer for \"91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08\"" Jan 22 00:30:57.972434 containerd[1635]: time="2026-01-22T00:30:57.972336224Z" level=info msg="connecting to shim 91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08" address="unix:///run/containerd/s/e88f5396e7c6e1de237aecec03aa54933ea4d0eb76a4f7678093913a488605b0" protocol=ttrpc version=3 Jan 22 00:30:57.995713 kubelet[2573]: E0122 00:30:57.993714 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 00:30:58.011000 audit: BPF prog-id=96 op=LOAD Jan 22 00:30:58.067000 audit: BPF prog-id=97 op=LOAD Jan 22 00:30:58.067000 audit[2757]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.067000 audit: BPF prog-id=97 op=UNLOAD Jan 22 00:30:58.067000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.071000 audit: BPF prog-id=98 op=LOAD Jan 22 00:30:58.071000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.071000 audit: BPF prog-id=99 op=LOAD Jan 22 
00:30:58.071000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.071000 audit: BPF prog-id=99 op=UNLOAD Jan 22 00:30:58.071000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.072000 audit: BPF prog-id=98 op=UNLOAD Jan 22 00:30:58.072000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.072000 audit: 
BPF prog-id=100 op=LOAD Jan 22 00:30:58.072000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2642 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566323637366131656338653863343137336161313731316464653961 Jan 22 00:30:58.103380 kubelet[2573]: E0122 00:30:58.103249 2573 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 00:30:58.133553 systemd[1]: Started cri-containerd-1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048.scope - libcontainer container 1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048. Jan 22 00:30:58.214926 systemd[1]: Started cri-containerd-91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08.scope - libcontainer container 91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08. 
Jan 22 00:30:58.262000 audit: BPF prog-id=101 op=LOAD Jan 22 00:30:58.264000 audit: BPF prog-id=102 op=LOAD Jan 22 00:30:58.264000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.264000 audit: BPF prog-id=102 op=UNLOAD Jan 22 00:30:58.264000 audit[2780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.264000 audit: BPF prog-id=103 op=LOAD Jan 22 00:30:58.264000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.264000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.265000 audit: BPF prog-id=104 op=LOAD Jan 22 00:30:58.265000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.265000 audit: BPF prog-id=104 op=UNLOAD Jan 22 00:30:58.265000 audit[2780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.265000 audit: BPF prog-id=103 op=UNLOAD Jan 22 00:30:58.265000 audit[2780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:30:58.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.265000 audit: BPF prog-id=105 op=LOAD Jan 22 00:30:58.265000 audit[2780]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2644 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931623766383932363336326332616239333933393536303036376537 Jan 22 00:30:58.289440 containerd[1635]: time="2026-01-22T00:30:58.289160932Z" level=info msg="StartContainer for \"5f2676a1ec8e8c4173aa1711dde9ac4b54394a52760bae40de7f9ea86724de3b\" returns successfully" Jan 22 00:30:58.309000 audit: BPF prog-id=106 op=LOAD Jan 22 00:30:58.310000 audit: BPF prog-id=107 op=LOAD Jan 22 00:30:58.310000 audit[2774]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 00:30:58.310000 audit: BPF prog-id=107 op=UNLOAD Jan 22 
00:30:58.310000 audit[2774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 00:30:58.311000 audit: BPF prog-id=108 op=LOAD Jan 22 00:30:58.311000 audit[2774]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 00:30:58.312000 audit: BPF prog-id=109 op=LOAD Jan 22 00:30:58.312000 audit[2774]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 
00:30:58.312000 audit: BPF prog-id=109 op=UNLOAD Jan 22 00:30:58.312000 audit[2774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 00:30:58.313000 audit: BPF prog-id=108 op=UNLOAD Jan 22 00:30:58.313000 audit[2774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 00:30:58.313000 audit: BPF prog-id=110 op=LOAD Jan 22 00:30:58.313000 audit[2774]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2648 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:30:58.313000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165306264346234343232353137376537373538343661323736393532 Jan 22 00:30:58.521644 containerd[1635]: time="2026-01-22T00:30:58.520153744Z" level=info msg="StartContainer for \"91b7f8926362c2ab93939560067e7bd690a195fe77e98668249f19a4a1503b08\" returns successfully" Jan 22 00:30:58.869917 containerd[1635]: time="2026-01-22T00:30:58.869087692Z" level=info msg="StartContainer for \"1e0bd4b44225177e775846a276952ca19d29aa7af9c95fd7a024866460e0d048\" returns successfully" Jan 22 00:30:58.961603 kubelet[2573]: E0122 00:30:58.960711 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:30:58.975936 kubelet[2573]: E0122 00:30:58.964448 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:58.975936 kubelet[2573]: E0122 00:30:58.966053 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:30:58.975936 kubelet[2573]: E0122 00:30:58.969527 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:30:58.988922 kubelet[2573]: E0122 00:30:58.988113 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:30:58.988922 kubelet[2573]: E0122 00:30:58.988296 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:00.004936 kubelet[2573]: E0122 00:31:00.004377 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:00.004936 kubelet[2573]: E0122 00:31:00.004612 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:00.006648 kubelet[2573]: E0122 00:31:00.006529 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:00.009937 kubelet[2573]: E0122 00:31:00.006681 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:01.024927 kubelet[2573]: E0122 00:31:01.024259 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:01.024927 kubelet[2573]: E0122 00:31:01.024541 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:01.026118 kubelet[2573]: E0122 00:31:01.025922 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:01.026118 kubelet[2573]: E0122 00:31:01.026061 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:01.081922 kubelet[2573]: I0122 00:31:01.081595 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 
00:31:04.704449 kubelet[2573]: E0122 00:31:04.703694 2573 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 22 00:31:05.767547 kubelet[2573]: E0122 00:31:05.761261 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:05.767547 kubelet[2573]: E0122 00:31:05.763361 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:06.153776 kubelet[2573]: E0122 00:31:06.152713 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:06.153776 kubelet[2573]: E0122 00:31:06.153543 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:06.498466 kubelet[2573]: E0122 00:31:06.487362 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:06.498466 kubelet[2573]: E0122 00:31:06.488212 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:10.773325 kubelet[2573]: E0122 00:31:10.772470 2573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 22 00:31:11.006034 kubelet[2573]: E0122 00:31:10.984596 2573 
certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 00:31:11.148755 kubelet[2573]: E0122 00:31:11.146675 2573 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 22 00:31:11.690356 kubelet[2573]: E0122 00:31:11.686023 2573 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ce631a0d1a3b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-22 00:30:54.475641776 +0000 UTC m=+1.054057083,LastTimestamp:2026-01-22 00:30:54.475641776 +0000 UTC m=+1.054057083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 22 00:31:11.737221 kubelet[2573]: I0122 00:31:11.736625 2573 apiserver.go:52] "Watching apiserver" Jan 22 00:31:11.806070 kubelet[2573]: I0122 00:31:11.804521 2573 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 00:31:12.376575 kubelet[2573]: E0122 00:31:12.376209 2573 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 22 00:31:12.974565 kubelet[2573]: E0122 00:31:12.973597 2573 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out 
waiting for the condition; caused by: nodes "localhost" not found Jan 22 00:31:13.766090 kubelet[2573]: E0122 00:31:13.764075 2573 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 22 00:31:14.708036 kubelet[2573]: E0122 00:31:14.705277 2573 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 22 00:31:14.801220 kubelet[2573]: E0122 00:31:14.800401 2573 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 22 00:31:15.741204 kubelet[2573]: E0122 00:31:15.740546 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:15.744177 kubelet[2573]: E0122 00:31:15.743750 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:16.048341 kubelet[2573]: E0122 00:31:16.048299 2573 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 22 00:31:16.050615 kubelet[2573]: E0122 00:31:16.050585 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:17.225287 kubelet[2573]: E0122 00:31:17.225233 2573 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 22 00:31:17.557516 kubelet[2573]: I0122 00:31:17.555548 2573 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 22 00:31:17.590954 kubelet[2573]: 
I0122 00:31:17.590739 2573 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 22 00:31:17.592168 kubelet[2573]: E0122 00:31:17.590792 2573 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 22 00:31:17.595712 kubelet[2573]: I0122 00:31:17.595675 2573 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 22 00:31:17.663365 kubelet[2573]: E0122 00:31:17.663330 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:17.664371 kubelet[2573]: I0122 00:31:17.663587 2573 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 22 00:31:17.689655 kubelet[2573]: I0122 00:31:17.688452 2573 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 22 00:31:17.689655 kubelet[2573]: E0122 00:31:17.689638 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:17.709137 kubelet[2573]: E0122 00:31:17.709096 2573 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:18.043489 systemd[1]: Reload requested from client PID 2871 ('systemctl') (unit session-9.scope)... Jan 22 00:31:18.045205 systemd[1]: Reloading... Jan 22 00:31:18.346251 zram_generator::config[2920]: No configuration found. Jan 22 00:31:18.988212 systemd[1]: Reloading finished in 941 ms. Jan 22 00:31:19.091395 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:31:19.133793 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 22 00:31:19.134674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:31:19.135140 systemd[1]: kubelet.service: Consumed 4.832s CPU time, 128M memory peak. Jan 22 00:31:19.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:31:19.141768 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 22 00:31:19.142155 kernel: audit: type=1131 audit(1769041879.134:393): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:31:19.145540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:31:19.148000 audit: BPF prog-id=111 op=LOAD Jan 22 00:31:19.184260 kernel: audit: type=1334 audit(1769041879.148:394): prog-id=111 op=LOAD Jan 22 00:31:19.148000 audit: BPF prog-id=69 op=UNLOAD Jan 22 00:31:19.192688 kernel: audit: type=1334 audit(1769041879.148:395): prog-id=69 op=UNLOAD Jan 22 00:31:19.192758 kernel: audit: type=1334 audit(1769041879.148:396): prog-id=112 op=LOAD Jan 22 00:31:19.148000 audit: BPF prog-id=112 op=LOAD Jan 22 00:31:19.148000 audit: BPF prog-id=113 op=LOAD Jan 22 00:31:19.209306 kernel: audit: type=1334 audit(1769041879.148:397): prog-id=113 op=LOAD Jan 22 00:31:19.209401 kernel: audit: type=1334 audit(1769041879.148:398): prog-id=70 op=UNLOAD Jan 22 00:31:19.148000 audit: BPF prog-id=70 op=UNLOAD Jan 22 00:31:19.218585 kernel: audit: type=1334 audit(1769041879.148:399): prog-id=71 op=UNLOAD Jan 22 00:31:19.148000 audit: BPF prog-id=71 op=UNLOAD Jan 22 00:31:19.226885 kernel: audit: type=1334 audit(1769041879.150:400): prog-id=114 op=LOAD Jan 22 00:31:19.150000 audit: BPF prog-id=114 op=LOAD Jan 22 00:31:19.235351 kernel: audit: type=1334 audit(1769041879.150:401): prog-id=72 
op=UNLOAD Jan 22 00:31:19.150000 audit: BPF prog-id=72 op=UNLOAD Jan 22 00:31:19.244528 kernel: audit: type=1334 audit(1769041879.150:402): prog-id=115 op=LOAD Jan 22 00:31:19.150000 audit: BPF prog-id=115 op=LOAD Jan 22 00:31:19.150000 audit: BPF prog-id=116 op=LOAD Jan 22 00:31:19.150000 audit: BPF prog-id=73 op=UNLOAD Jan 22 00:31:19.150000 audit: BPF prog-id=74 op=UNLOAD Jan 22 00:31:19.154000 audit: BPF prog-id=117 op=LOAD Jan 22 00:31:19.154000 audit: BPF prog-id=62 op=UNLOAD Jan 22 00:31:19.154000 audit: BPF prog-id=118 op=LOAD Jan 22 00:31:19.154000 audit: BPF prog-id=119 op=LOAD Jan 22 00:31:19.154000 audit: BPF prog-id=63 op=UNLOAD Jan 22 00:31:19.154000 audit: BPF prog-id=64 op=UNLOAD Jan 22 00:31:19.156000 audit: BPF prog-id=120 op=LOAD Jan 22 00:31:19.156000 audit: BPF prog-id=65 op=UNLOAD Jan 22 00:31:19.161000 audit: BPF prog-id=121 op=LOAD Jan 22 00:31:19.161000 audit: BPF prog-id=61 op=UNLOAD Jan 22 00:31:19.163000 audit: BPF prog-id=122 op=LOAD Jan 22 00:31:19.163000 audit: BPF prog-id=66 op=UNLOAD Jan 22 00:31:19.163000 audit: BPF prog-id=123 op=LOAD Jan 22 00:31:19.163000 audit: BPF prog-id=124 op=LOAD Jan 22 00:31:19.163000 audit: BPF prog-id=67 op=UNLOAD Jan 22 00:31:19.163000 audit: BPF prog-id=68 op=UNLOAD Jan 22 00:31:19.165000 audit: BPF prog-id=125 op=LOAD Jan 22 00:31:19.165000 audit: BPF prog-id=78 op=UNLOAD Jan 22 00:31:19.165000 audit: BPF prog-id=126 op=LOAD Jan 22 00:31:19.165000 audit: BPF prog-id=127 op=LOAD Jan 22 00:31:19.165000 audit: BPF prog-id=79 op=UNLOAD Jan 22 00:31:19.165000 audit: BPF prog-id=80 op=UNLOAD Jan 22 00:31:19.166000 audit: BPF prog-id=128 op=LOAD Jan 22 00:31:19.166000 audit: BPF prog-id=129 op=LOAD Jan 22 00:31:19.166000 audit: BPF prog-id=75 op=UNLOAD Jan 22 00:31:19.166000 audit: BPF prog-id=76 op=UNLOAD Jan 22 00:31:19.168000 audit: BPF prog-id=130 op=LOAD Jan 22 00:31:19.168000 audit: BPF prog-id=77 op=UNLOAD Jan 22 00:31:19.905588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node 
Agent. Jan 22 00:31:19.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:31:19.942794 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:31:21.206205 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 00:31:21.206205 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:31:21.206205 kubelet[2961]: I0122 00:31:21.206145 2961 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:31:21.247714 kubelet[2961]: I0122 00:31:21.247524 2961 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 22 00:31:21.247714 kubelet[2961]: I0122 00:31:21.247557 2961 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:31:21.247714 kubelet[2961]: I0122 00:31:21.247593 2961 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 22 00:31:21.247714 kubelet[2961]: I0122 00:31:21.247601 2961 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 22 00:31:21.253995 kubelet[2961]: I0122 00:31:21.248247 2961 server.go:956] "Client rotation is on, will bootstrap in background" Jan 22 00:31:21.253995 kubelet[2961]: I0122 00:31:21.251778 2961 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 22 00:31:21.265960 kubelet[2961]: I0122 00:31:21.263505 2961 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:31:21.375699 kubelet[2961]: I0122 00:31:21.374181 2961 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:31:21.418114 kubelet[2961]: I0122 00:31:21.416410 2961 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 22 00:31:21.418114 kubelet[2961]: I0122 00:31:21.417655 2961 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:31:21.418114 kubelet[2961]: I0122 00:31:21.417703 2961 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:31:21.418114 kubelet[2961]: I0122 00:31:21.418390 2961 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:31:21.434119 kubelet[2961]: I0122 00:31:21.418411 2961 container_manager_linux.go:306] "Creating device plugin manager" Jan 22 00:31:21.434119 kubelet[2961]: I0122 00:31:21.418453 2961 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 22 00:31:21.434119 kubelet[2961]: I0122 00:31:21.433773 2961 state_mem.go:36] 
"Initialized new in-memory state store" Jan 22 00:31:21.436375 kubelet[2961]: I0122 00:31:21.434788 2961 kubelet.go:475] "Attempting to sync node with API server" Jan 22 00:31:21.436375 kubelet[2961]: I0122 00:31:21.434972 2961 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:31:21.436375 kubelet[2961]: I0122 00:31:21.435157 2961 kubelet.go:387] "Adding apiserver pod source" Jan 22 00:31:21.436375 kubelet[2961]: I0122 00:31:21.435278 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:31:21.491627 kubelet[2961]: I0122 00:31:21.486426 2961 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:31:21.559985 kubelet[2961]: I0122 00:31:21.554210 2961 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 22 00:31:21.568512 kubelet[2961]: I0122 00:31:21.567950 2961 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 22 00:31:21.608619 kubelet[2961]: I0122 00:31:21.608584 2961 server.go:1262] "Started kubelet" Jan 22 00:31:21.613528 kubelet[2961]: I0122 00:31:21.613209 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:31:21.624510 kubelet[2961]: I0122 00:31:21.624362 2961 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 00:31:21.632410 kubelet[2961]: I0122 00:31:21.632374 2961 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 22 00:31:21.640686 kubelet[2961]: I0122 00:31:21.635509 2961 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 00:31:21.640686 kubelet[2961]: I0122 00:31:21.635706 2961 reconciler.go:29] "Reconciler: start to sync state" Jan 22 00:31:21.640686 kubelet[2961]: I0122 00:31:21.636142 2961 
ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:31:21.640686 kubelet[2961]: I0122 00:31:21.636373 2961 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 22 00:31:21.640686 kubelet[2961]: I0122 00:31:21.636633 2961 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:31:21.669010 kubelet[2961]: I0122 00:31:21.666705 2961 factory.go:223] Registration of the systemd container factory successfully Jan 22 00:31:21.669010 kubelet[2961]: I0122 00:31:21.667206 2961 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 00:31:21.672460 kubelet[2961]: I0122 00:31:21.670573 2961 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:31:21.718203 kubelet[2961]: I0122 00:31:21.715755 2961 factory.go:223] Registration of the containerd container factory successfully Jan 22 00:31:21.730724 kubelet[2961]: I0122 00:31:21.725336 2961 server.go:310] "Adding debug handlers to kubelet server" Jan 22 00:31:21.763517 kubelet[2961]: I0122 00:31:21.757977 2961 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 22 00:31:21.775602 kubelet[2961]: E0122 00:31:21.773621 2961 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:31:21.989094 kubelet[2961]: I0122 00:31:21.983246 2961 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 22 00:31:21.989094 kubelet[2961]: I0122 00:31:21.983356 2961 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 22 00:31:21.989094 kubelet[2961]: I0122 00:31:21.983403 2961 kubelet.go:2427] "Starting kubelet main sync loop" Jan 22 00:31:21.989094 kubelet[2961]: E0122 00:31:21.983669 2961 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:31:22.102550 kubelet[2961]: E0122 00:31:22.099641 2961 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 22 00:31:22.322755 kubelet[2961]: E0122 00:31:22.321640 2961 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.445648 2961 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.445680 2961 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.445716 2961 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446156 2961 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446178 2961 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446208 2961 policy_none.go:49] "None policy: Start" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446222 2961 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446241 2961 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446491 2961 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Jan 22 00:31:22.447945 kubelet[2961]: I0122 00:31:22.446508 2961 policy_none.go:47] "Start" Jan 22 00:31:22.464158 kubelet[2961]: I0122 00:31:22.463532 2961 apiserver.go:52] "Watching apiserver" Jan 22 00:31:22.474264 kubelet[2961]: E0122 00:31:22.474227 2961 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 00:31:22.475263 kubelet[2961]: I0122 00:31:22.474561 2961 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:31:22.475263 kubelet[2961]: I0122 00:31:22.474582 2961 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:31:22.480283 kubelet[2961]: I0122 00:31:22.477620 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:31:22.519418 kubelet[2961]: E0122 00:31:22.518686 2961 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 22 00:31:22.772003 kubelet[2961]: I0122 00:31:22.735521 2961 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 22 00:31:22.898498 kubelet[2961]: I0122 00:31:22.897557 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:31:22.898498 kubelet[2961]: I0122 00:31:22.897714 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 22 
00:31:22.898498 kubelet[2961]: I0122 00:31:22.897975 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:31:22.898498 kubelet[2961]: I0122 00:31:22.898145 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 22 00:31:22.898498 kubelet[2961]: I0122 00:31:22.898526 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4d4d0e17fd226e378560c19b2cb09c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a4d4d0e17fd226e378560c19b2cb09c3\") " pod="kube-system/kube-apiserver-localhost" Jan 22 00:31:22.905290 kubelet[2961]: I0122 00:31:22.898554 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4d4d0e17fd226e378560c19b2cb09c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a4d4d0e17fd226e378560c19b2cb09c3\") " pod="kube-system/kube-apiserver-localhost" Jan 22 00:31:22.905290 kubelet[2961]: I0122 00:31:22.898575 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4d4d0e17fd226e378560c19b2cb09c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a4d4d0e17fd226e378560c19b2cb09c3\") " pod="kube-system/kube-apiserver-localhost" 
Jan 22 00:31:22.905290 kubelet[2961]: I0122 00:31:22.898602 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 22 00:31:22.905290 kubelet[2961]: I0122 00:31:22.898791 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 22 00:31:22.905290 kubelet[2961]: I0122 00:31:22.899703 2961 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 22 00:31:22.923134 kubelet[2961]: I0122 00:31:22.922141 2961 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 22 00:31:22.941436 kubelet[2961]: I0122 00:31:22.939417 2961 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 22 00:31:23.096441 kubelet[2961]: I0122 00:31:23.092611 2961 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 22 00:31:23.096441 kubelet[2961]: I0122 00:31:23.092708 2961 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 22 00:31:23.241444 kubelet[2961]: E0122 00:31:23.241189 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:23.241444 kubelet[2961]: E0122 00:31:23.241633 2961 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 22 00:31:23.251427 kubelet[2961]: E0122 00:31:23.242015 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:23.567294 kubelet[2961]: E0122 00:31:23.565296 2961 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 22 00:31:23.567294 kubelet[2961]: E0122 00:31:23.566782 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:23.874354 kubelet[2961]: I0122 00:31:23.872597 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.872504644 podStartE2EDuration="6.872504644s" podCreationTimestamp="2026-01-22 00:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:31:23.790023369 +0000 UTC m=+3.818928953" watchObservedRunningTime="2026-01-22 00:31:23.872504644 +0000 UTC m=+3.901410208"
Jan 22 00:31:23.950767 kubelet[2961]: I0122 00:31:23.950691 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.950668388 podStartE2EDuration="6.950668388s" podCreationTimestamp="2026-01-22 00:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:31:23.949736599 +0000 UTC m=+3.978642164" watchObservedRunningTime="2026-01-22 00:31:23.950668388 +0000 UTC m=+3.979573952"
Jan 22 00:31:24.305959 kubelet[2961]: E0122 00:31:24.304985 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:24.305959 kubelet[2961]: E0122 00:31:24.305505 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:24.315524 kubelet[2961]: E0122 00:31:24.310412 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:24.708501 kubelet[2961]: I0122 00:31:24.706290 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.706264195 podStartE2EDuration="7.706264195s" podCreationTimestamp="2026-01-22 00:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:31:24.149293147 +0000 UTC m=+4.178198711" watchObservedRunningTime="2026-01-22 00:31:24.706264195 +0000 UTC m=+4.735169779"
Jan 22 00:31:24.804030 kubelet[2961]: I0122 00:31:24.803244 2961 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 22 00:31:24.805162 containerd[1635]: time="2026-01-22T00:31:24.805013585Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 22 00:31:24.809748 kubelet[2961]: I0122 00:31:24.808438 2961 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 22 00:31:25.387787 kubelet[2961]: E0122 00:31:25.387545 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:25.409152 kubelet[2961]: E0122 00:31:25.397062 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:25.779170 systemd[1]: Created slice kubepods-besteffort-podff88e2b2_59b5_4c1c_9833_a5b6d8fd5fde.slice - libcontainer container kubepods-besteffort-podff88e2b2_59b5_4c1c_9833_a5b6d8fd5fde.slice.
Jan 22 00:31:25.901340 kubelet[2961]: I0122 00:31:25.899431 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2zv\" (UniqueName: \"kubernetes.io/projected/ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde-kube-api-access-pg2zv\") pod \"kube-proxy-f68rh\" (UID: \"ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde\") " pod="kube-system/kube-proxy-f68rh"
Jan 22 00:31:25.911379 kubelet[2961]: I0122 00:31:25.904763 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde-xtables-lock\") pod \"kube-proxy-f68rh\" (UID: \"ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde\") " pod="kube-system/kube-proxy-f68rh"
Jan 22 00:31:25.911379 kubelet[2961]: I0122 00:31:25.906762 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde-lib-modules\") pod \"kube-proxy-f68rh\" (UID: \"ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde\") " pod="kube-system/kube-proxy-f68rh"
Jan 22 00:31:25.916577 kubelet[2961]: I0122 00:31:25.916505 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde-kube-proxy\") pod \"kube-proxy-f68rh\" (UID: \"ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde\") " pod="kube-system/kube-proxy-f68rh"
Jan 22 00:31:26.583651 kubelet[2961]: E0122 00:31:26.582595 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:26.657737 containerd[1635]: time="2026-01-22T00:31:26.647315861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f68rh,Uid:ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde,Namespace:kube-system,Attempt:0,}"
Jan 22 00:31:27.116786 containerd[1635]: time="2026-01-22T00:31:27.115204191Z" level=info msg="connecting to shim 2a1989e7774693c9328e565ec1d6862adbdcbfd25967c306e058531f7eaf830e" address="unix:///run/containerd/s/bcee96c3a19ab15857e839671a9a56934e73fc8b3ae503bb3cd1a044ed3b173a" namespace=k8s.io protocol=ttrpc version=3
Jan 22 00:31:27.688513 systemd[1]: Started cri-containerd-2a1989e7774693c9328e565ec1d6862adbdcbfd25967c306e058531f7eaf830e.scope - libcontainer container 2a1989e7774693c9328e565ec1d6862adbdcbfd25967c306e058531f7eaf830e.
Jan 22 00:31:27.799244 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jan 22 00:31:27.809658 kernel: audit: type=1334 audit(1769041887.777:435): prog-id=131 op=LOAD
Jan 22 00:31:27.777000 audit: BPF prog-id=131 op=LOAD
Jan 22 00:31:27.878247 kernel: audit: type=1334 audit(1769041887.863:436): prog-id=132 op=LOAD
Jan 22 00:31:27.951007 kernel: audit: type=1300 audit(1769041887.863:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.863000 audit: BPF prog-id=132 op=LOAD
Jan 22 00:31:27.863000 audit[3038]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.994952 kernel: audit: type=1327 audit(1769041887.863:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.863000 audit: BPF prog-id=132 op=UNLOAD
Jan 22 00:31:27.863000 audit[3038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.053063 kernel: audit: type=1334 audit(1769041887.863:437): prog-id=132 op=UNLOAD
Jan 22 00:31:28.053302 kernel: audit: type=1300 audit(1769041887.863:437): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:28.098965 kernel: audit: type=1327 audit(1769041887.863:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.903000 audit: BPF prog-id=133 op=LOAD
Jan 22 00:31:27.903000 audit[3038]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.138487 kernel: audit: type=1334 audit(1769041887.903:438): prog-id=133 op=LOAD
Jan 22 00:31:28.139282 kernel: audit: type=1300 audit(1769041887.903:438): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.139342 kernel: audit: type=1327 audit(1769041887.903:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.903000 audit: BPF prog-id=134 op=LOAD
Jan 22 00:31:27.903000 audit[3038]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.903000 audit: BPF prog-id=134 op=UNLOAD
Jan 22 00:31:27.903000 audit[3038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.910000 audit: BPF prog-id=133 op=UNLOAD
Jan 22 00:31:27.910000 audit[3038]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:27.910000 audit: BPF prog-id=135 op=LOAD
Jan 22 00:31:27.910000 audit[3038]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3026 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:27.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261313938396537373734363933633933323865353635656331643638
Jan 22 00:31:28.225656 kubelet[2961]: I0122 00:31:28.224311 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tp9w\" (UniqueName: \"kubernetes.io/projected/de4a3f52-bb75-4986-a637-79bd00c0f84b-kube-api-access-7tp9w\") pod \"tigera-operator-65cdcdfd6d-fmnjg\" (UID: \"de4a3f52-bb75-4986-a637-79bd00c0f84b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-fmnjg"
Jan 22 00:31:28.225656 kubelet[2961]: I0122 00:31:28.224528 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de4a3f52-bb75-4986-a637-79bd00c0f84b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-fmnjg\" (UID: \"de4a3f52-bb75-4986-a637-79bd00c0f84b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-fmnjg"
Jan 22 00:31:28.239406 systemd[1]: Created slice kubepods-besteffort-podde4a3f52_bb75_4986_a637_79bd00c0f84b.slice - libcontainer container kubepods-besteffort-podde4a3f52_bb75_4986_a637_79bd00c0f84b.slice.
Jan 22 00:31:28.292388 containerd[1635]: time="2026-01-22T00:31:28.292258892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f68rh,Uid:ff88e2b2-59b5-4c1c-9833-a5b6d8fd5fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a1989e7774693c9328e565ec1d6862adbdcbfd25967c306e058531f7eaf830e\""
Jan 22 00:31:28.301057 kubelet[2961]: E0122 00:31:28.295437 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:28.320994 containerd[1635]: time="2026-01-22T00:31:28.320711454Z" level=info msg="CreateContainer within sandbox \"2a1989e7774693c9328e565ec1d6862adbdcbfd25967c306e058531f7eaf830e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 22 00:31:28.378696 containerd[1635]: time="2026-01-22T00:31:28.377789246Z" level=info msg="Container 331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819: CDI devices from CRI Config.CDIDevices: []"
Jan 22 00:31:28.423301 containerd[1635]: time="2026-01-22T00:31:28.423078210Z" level=info msg="CreateContainer within sandbox \"2a1989e7774693c9328e565ec1d6862adbdcbfd25967c306e058531f7eaf830e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819\""
Jan 22 00:31:28.432317 containerd[1635]: time="2026-01-22T00:31:28.430426872Z" level=info msg="StartContainer for \"331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819\""
Jan 22 00:31:28.439272 containerd[1635]: time="2026-01-22T00:31:28.438775193Z" level=info msg="connecting to shim 331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819" address="unix:///run/containerd/s/bcee96c3a19ab15857e839671a9a56934e73fc8b3ae503bb3cd1a044ed3b173a" protocol=ttrpc version=3
Jan 22 00:31:28.586326 containerd[1635]: time="2026-01-22T00:31:28.584718655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-fmnjg,Uid:de4a3f52-bb75-4986-a637-79bd00c0f84b,Namespace:tigera-operator,Attempt:0,}"
Jan 22 00:31:28.719371 containerd[1635]: time="2026-01-22T00:31:28.719234728Z" level=info msg="connecting to shim 7abf6f5f3cd0a48613500ef594e54d48c9d1ec96120e438ea83a8341547e2245" address="unix:///run/containerd/s/e7065fb3d3b2830994a53b82dcce129b3cf316d9f37d5661a059d7c0c2c3b42d" namespace=k8s.io protocol=ttrpc version=3
Jan 22 00:31:28.735619 systemd[1]: Started cri-containerd-331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819.scope - libcontainer container 331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819.
Jan 22 00:31:28.858470 systemd[1]: Started cri-containerd-7abf6f5f3cd0a48613500ef594e54d48c9d1ec96120e438ea83a8341547e2245.scope - libcontainer container 7abf6f5f3cd0a48613500ef594e54d48c9d1ec96120e438ea83a8341547e2245.
Jan 22 00:31:28.903000 audit: BPF prog-id=136 op=LOAD
Jan 22 00:31:28.911000 audit: BPF prog-id=137 op=LOAD
Jan 22 00:31:28.911000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.912000 audit: BPF prog-id=137 op=UNLOAD
Jan 22 00:31:28.912000 audit[3108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.912000 audit: BPF prog-id=138 op=LOAD
Jan 22 00:31:28.912000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.912000 audit: BPF prog-id=139 op=LOAD
Jan 22 00:31:28.912000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.912000 audit: BPF prog-id=139 op=UNLOAD
Jan 22 00:31:28.912000 audit[3108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.912000 audit: BPF prog-id=138 op=UNLOAD
Jan 22 00:31:28.912000 audit[3108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.912000 audit: BPF prog-id=140 op=LOAD
Jan 22 00:31:28.912000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3089 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761626636663566336364306134383631333530306566353934653534
Jan 22 00:31:28.913000 audit: BPF prog-id=141 op=LOAD
Jan 22 00:31:28.913000 audit[3069]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3026 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333316134646631383232613936303038616133386232366131633533
Jan 22 00:31:28.913000 audit: BPF prog-id=142 op=LOAD
Jan 22 00:31:28.913000 audit[3069]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3026 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333316134646631383232613936303038616133386232366131633533
Jan 22 00:31:28.913000 audit: BPF prog-id=142 op=UNLOAD
Jan 22 00:31:28.913000 audit[3069]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3026 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333316134646631383232613936303038616133386232366131633533
Jan 22 00:31:28.913000 audit: BPF prog-id=141 op=UNLOAD
Jan 22 00:31:28.913000 audit[3069]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3026 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333316134646631383232613936303038616133386232366131633533
Jan 22 00:31:28.913000 audit: BPF prog-id=143 op=LOAD
Jan 22 00:31:28.913000 audit[3069]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3026 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:28.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333316134646631383232613936303038616133386232366131633533
Jan 22 00:31:29.175329 containerd[1635]: time="2026-01-22T00:31:29.173304607Z" level=info msg="StartContainer for \"331a4df1822a96008aa38b26a1c539b5c4e66022efc1c421136cee15c0644819\" returns successfully"
Jan 22 00:31:29.224555 containerd[1635]: time="2026-01-22T00:31:29.221779518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-fmnjg,Uid:de4a3f52-bb75-4986-a637-79bd00c0f84b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7abf6f5f3cd0a48613500ef594e54d48c9d1ec96120e438ea83a8341547e2245\""
Jan 22 00:31:29.233404 containerd[1635]: time="2026-01-22T00:31:29.232720225Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 22 00:31:29.881568 kubelet[2961]: E0122 00:31:29.881026 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:29.893574 kubelet[2961]: E0122 00:31:29.887200 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:30.076213 kubelet[2961]: E0122 00:31:30.072731 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:30.302983 kubelet[2961]: I0122 00:31:30.298725 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f68rh" podStartSLOduration=5.298696775 podStartE2EDuration="5.298696775s" podCreationTimestamp="2026-01-22 00:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:31:30.08223252 +0000 UTC m=+10.111138094" watchObservedRunningTime="2026-01-22 00:31:30.298696775 +0000 UTC m=+10.327602338"
Jan 22 00:31:31.035000 audit[3185]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.035000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc78d927f0 a2=0 a3=7ffc78d927dc items=0 ppid=3099 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jan 22 00:31:31.065000 audit[3188]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.065000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc7f53c20 a2=0 a3=7ffdc7f53c0c items=0 ppid=3099 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jan 22 00:31:31.078000 audit[3186]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 22 00:31:31.078000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcebbecde0 a2=0 a3=5497445c99dbd8cc items=0 ppid=3099 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jan 22 00:31:31.124000 audit[3189]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.124000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe0aa94c0 a2=0 a3=7fffe0aa94ac items=0 ppid=3099 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.124000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jan 22 00:31:31.139000 audit[3193]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 22 00:31:31.139000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3f8f0e00 a2=0 a3=7ffc3f8f0dec items=0 ppid=3099 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jan 22 00:31:31.156674 kubelet[2961]: E0122 00:31:31.155589 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:31.156674 kubelet[2961]: E0122 00:31:31.155600 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:31.161000 audit[3194]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 22 00:31:31.161000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd635c7040 a2=0 a3=7ffd635c702c items=0 ppid=3099 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jan 22 00:31:31.289217 kubelet[2961]: E0122 00:31:31.286774 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:31:31.293000 audit[3195]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.293000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd888c1f50 a2=0 a3=7ffd888c1f3c items=0 ppid=3099 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Jan 22 00:31:31.363000 audit[3197]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.363000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc6a11d80 a2=0 a3=7fffc6a11d6c items=0 ppid=3099 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D
Jan 22 00:31:31.477000 audit[3200]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.477000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcecd5ef50 a2=0 a3=7ffcecd5ef3c items=0 ppid=3099 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73
Jan 22 00:31:31.506000 audit[3201]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.506000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd699c6440 a2=0 a3=7ffd699c642c items=0 ppid=3099 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Jan 22 00:31:31.565000 audit[3203]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 22 00:31:31.565000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5330bcd0 a2=0 a3=7ffc5330bcbc items=0 ppid=3099 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:31:31.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Jan 22 00:31:31.561317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362945578.mount: Deactivated successfully.
Jan 22 00:31:31.574000 audit[3204]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.574000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce0901f90 a2=0 a3=7ffce0901f7c items=0 ppid=3099 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:31:31.607000 audit[3206]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.607000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeda4c9ab0 a2=0 a3=7ffeda4c9a9c items=0 ppid=3099 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:31.656000 audit[3209]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.656000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe1c7de9f0 a2=0 a3=7ffe1c7de9dc items=0 ppid=3099 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:31:31.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:31.669000 audit[3210]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.669000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb5c50d00 a2=0 a3=7fffb5c50cec items=0 ppid=3099 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:31:31.694000 audit[3212]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.694000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcdb6bdd50 a2=0 a3=7ffcdb6bdd3c items=0 ppid=3099 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:31:31.702000 audit[3213]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.702000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc90d275d0 a2=0 
a3=7ffc90d275bc items=0 ppid=3099 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:31:31.725000 audit[3215]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.725000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2cf971b0 a2=0 a3=7ffd2cf9719c items=0 ppid=3099 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 22 00:31:31.783906 kubelet[2961]: E0122 00:31:31.775412 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:31.821000 audit[3218]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.821000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc09275e60 a2=0 a3=7ffc09275e4c items=0 ppid=3099 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.821000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 22 00:31:31.911000 audit[3221]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.911000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcafd4ab0 a2=0 a3=7ffdcafd4a9c items=0 ppid=3099 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 22 00:31:31.925000 audit[3222]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.925000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec4e64810 a2=0 a3=7ffec4e647fc items=0 ppid=3099 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:31:31.951000 audit[3224]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.951000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7ffc44867670 a2=0 a3=7ffc4486765c items=0 ppid=3099 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:31.980000 audit[3227]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.980000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd04979160 a2=0 a3=7ffd0497914c items=0 ppid=3099 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:31.996000 audit[3228]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:31.996000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4f08a920 a2=0 a3=7ffe4f08a90c items=0 ppid=3099 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:31.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:31:32.026000 audit[3230]: NETFILTER_CFG table=nat:78 family=2 entries=1 
op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:31:32.026000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe10189fa0 a2=0 a3=7ffe10189f8c items=0 ppid=3099 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:31:32.174116 kubelet[2961]: E0122 00:31:32.172949 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:31:32.286000 audit[3236]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:31:32.286000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe6da85960 a2=0 a3=7ffe6da8594c items=0 ppid=3099 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:31:32.323000 audit[3236]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:31:32.323000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe6da85960 a2=0 a3=7ffe6da8594c items=0 ppid=3099 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:31:32.337000 audit[3241]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.337000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff6a7cedf0 a2=0 a3=7fff6a7ceddc items=0 ppid=3099 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:31:32.365000 audit[3243]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.365000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff17a896f0 a2=0 a3=7fff17a896dc items=0 ppid=3099 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 22 00:31:32.388000 audit[3246]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.388000 audit[3246]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=752 a0=3 a1=7fff64b3ab40 a2=0 a3=7fff64b3ab2c items=0 ppid=3099 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 22 00:31:32.394000 audit[3247]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.394000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdef171ac0 a2=0 a3=7ffdef171aac items=0 ppid=3099 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:31:32.417000 audit[3249]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.417000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd9bc1380 a2=0 a3=7ffdd9bc136c items=0 ppid=3099 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.417000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:31:32.430000 audit[3250]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.430000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd53c16740 a2=0 a3=7ffd53c1672c items=0 ppid=3099 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:31:32.477000 audit[3252]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.477000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc46da1920 a2=0 a3=7ffc46da190c items=0 ppid=3099 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:32.538000 audit[3255]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.538000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffedb5f9590 a2=0 
a3=7ffedb5f957c items=0 ppid=3099 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:32.603000 audit[3256]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.603000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbc8f0b60 a2=0 a3=7ffdbc8f0b4c items=0 ppid=3099 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:31:32.699000 audit[3259]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.699000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff720ef1b0 a2=0 a3=7fff720ef19c items=0 ppid=3099 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:31:32.706000 
audit[3260]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.706000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2d8b83b0 a2=0 a3=7ffe2d8b839c items=0 ppid=3099 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:31:32.755000 audit[3262]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.755000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff425a6600 a2=0 a3=7fff425a65ec items=0 ppid=3099 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 22 00:31:32.892998 kernel: kauditd_printk_skb: 166 callbacks suppressed Jan 22 00:31:32.893371 kernel: audit: type=1325 audit(1769041892.878:495): table=filter:93 family=10 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.878000 audit[3265]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.878000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7fffcd6d6ed0 a2=0 a3=7fffcd6d6ebc items=0 ppid=3099 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.973499 kernel: audit: type=1300 audit(1769041892.878:495): arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcd6d6ed0 a2=0 a3=7fffcd6d6ebc items=0 ppid=3099 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 22 00:31:33.074542 kernel: audit: type=1327 audit(1769041892.878:495): proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 22 00:31:33.076054 kernel: audit: type=1325 audit(1769041892.921:496): table=filter:94 family=10 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.921000 audit[3268]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.921000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfdebfa20 a2=0 a3=7ffdfdebfa0c items=0 ppid=3099 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:32.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 22 00:31:33.172412 kernel: audit: type=1300 audit(1769041892.921:496): arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfdebfa20 a2=0 a3=7ffdfdebfa0c items=0 ppid=3099 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.172503 kernel: audit: type=1327 audit(1769041892.921:496): proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 22 00:31:33.172529 kernel: audit: type=1325 audit(1769041892.933:497): table=nat:95 family=10 entries=1 op=nft_register_chain pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.933000 audit[3269]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:32.933000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1f8af460 a2=0 a3=7ffe1f8af44c items=0 ppid=3099 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.228111 kernel: audit: type=1300 audit(1769041892.933:497): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1f8af460 a2=0 a3=7ffe1f8af44c items=0 ppid=3099 pid=3269 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.228285 kernel: audit: type=1327 audit(1769041892.933:497): proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:31:32.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:31:33.241966 kernel: audit: type=1325 audit(1769041893.119:498): table=nat:96 family=10 entries=1 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.119000 audit[3271]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.119000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffe1090940 a2=0 a3=7fffe109092c items=0 ppid=3099 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.119000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:33.206000 audit[3274]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.206000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5b59b3a0 a2=0 a3=7ffd5b59b38c items=0 ppid=3099 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.206000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:31:33.223000 audit[3275]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.223000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc08837df0 a2=0 a3=7ffc08837ddc items=0 ppid=3099 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:31:33.242000 audit[3277]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.242000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc304a8610 a2=0 a3=7ffc304a85fc items=0 ppid=3099 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:31:33.253000 audit[3278]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.253000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc495ac070 a2=0 a3=7ffc495ac05c items=0 ppid=3099 pid=3278 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:31:33.288000 audit[3282]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.288000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0a185230 a2=0 a3=7ffd0a18521c items=0 ppid=3099 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:31:33.320000 audit[3287]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:31:33.320000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe22deb9b0 a2=0 a3=7ffe22deb99c items=0 ppid=3099 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:31:33.339000 audit[3289]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:31:33.339000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdff5cef70 a2=0 a3=7ffdff5cef5c items=0 ppid=3099 pid=3289 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.339000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:31:33.340000 audit[3289]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:31:33.340000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdff5cef70 a2=0 a3=7ffdff5cef5c items=0 ppid=3099 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:33.340000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:31:40.072349 containerd[1635]: time="2026-01-22T00:31:40.071617412Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:31:40.078336 containerd[1635]: time="2026-01-22T00:31:40.074628698Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 22 00:31:40.079746 containerd[1635]: time="2026-01-22T00:31:40.079419861Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:31:40.087502 containerd[1635]: time="2026-01-22T00:31:40.087042924Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:31:40.088063 containerd[1635]: 
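The audit PROCTITLE records above encode each process's command line as hex-encoded bytes with NUL separators between arguments. A minimal sketch (Python 3, standard library only) of decoding one of the `proctitle` values from the `ip6tables-restore` events above:

```python
# Decode an audit PROCTITLE hex string into its argv list.
# The kernel logs the command line as hex bytes, with a NUL byte
# (0x00) separating each argument.
proctitle = (
    "6970367461626C65732D726573746F7265002D770035"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = [arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# → ['ip6tables-restore', '-w', '5', '--noflush', '--counters']
```

The same decoding applies to the longer `runc` and `iptables` PROCTITLE records later in the log; `ausearch -i` performs this interpretation automatically when the raw audit log is available.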
time="2026-01-22T00:31:40.087717246Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 10.854955982s" Jan 22 00:31:40.088063 containerd[1635]: time="2026-01-22T00:31:40.087997580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 22 00:31:40.120793 containerd[1635]: time="2026-01-22T00:31:40.120464797Z" level=info msg="CreateContainer within sandbox \"7abf6f5f3cd0a48613500ef594e54d48c9d1ec96120e438ea83a8341547e2245\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 22 00:31:40.173650 containerd[1635]: time="2026-01-22T00:31:40.173523551Z" level=info msg="Container 57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:31:40.182067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921939781.mount: Deactivated successfully. 
Jan 22 00:31:40.402164 containerd[1635]: time="2026-01-22T00:31:40.392509268Z" level=info msg="CreateContainer within sandbox \"7abf6f5f3cd0a48613500ef594e54d48c9d1ec96120e438ea83a8341547e2245\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171\"" Jan 22 00:31:40.413561 containerd[1635]: time="2026-01-22T00:31:40.409056958Z" level=info msg="StartContainer for \"57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171\"" Jan 22 00:31:40.413561 containerd[1635]: time="2026-01-22T00:31:40.413085636Z" level=info msg="connecting to shim 57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171" address="unix:///run/containerd/s/e7065fb3d3b2830994a53b82dcce129b3cf316d9f37d5661a059d7c0c2c3b42d" protocol=ttrpc version=3 Jan 22 00:31:40.654210 systemd[1]: Started cri-containerd-57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171.scope - libcontainer container 57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171. 
Jan 22 00:31:40.750000 audit: BPF prog-id=144 op=LOAD Jan 22 00:31:40.765027 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 22 00:31:40.765183 kernel: audit: type=1334 audit(1769041900.750:507): prog-id=144 op=LOAD Jan 22 00:31:40.753000 audit: BPF prog-id=145 op=LOAD Jan 22 00:31:40.790692 kernel: audit: type=1334 audit(1769041900.753:508): prog-id=145 op=LOAD Jan 22 00:31:40.791362 kernel: audit: type=1300 audit(1769041900.753:508): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.753000 audit[3290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.833042 kernel: audit: type=1327 audit(1769041900.753:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.870599 kernel: audit: type=1334 audit(1769041900.753:509): prog-id=145 op=UNLOAD Jan 22 00:31:40.882535 kernel: audit: type=1300 audit(1769041900.753:509): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.753000 audit: BPF prog-id=145 op=UNLOAD Jan 22 00:31:40.753000 audit[3290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.927943 kernel: audit: type=1327 audit(1769041900.753:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.753000 audit: BPF prog-id=146 op=LOAD Jan 22 00:31:40.980785 kernel: audit: type=1334 audit(1769041900.753:510): prog-id=146 op=LOAD Jan 22 00:31:40.981069 kernel: audit: type=1300 audit(1769041900.753:510): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.753000 audit[3290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:31:40.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:41.060086 kernel: audit: type=1327 audit(1769041900.753:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.753000 audit: BPF prog-id=147 op=LOAD Jan 22 00:31:40.753000 audit[3290]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.754000 audit: BPF prog-id=147 op=UNLOAD Jan 22 00:31:40.754000 audit[3290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 
Jan 22 00:31:40.754000 audit: BPF prog-id=146 op=UNLOAD Jan 22 00:31:40.754000 audit[3290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:40.754000 audit: BPF prog-id=148 op=LOAD Jan 22 00:31:40.754000 audit[3290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3089 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:31:40.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343633636539666533366166386130643939356636323666356662 Jan 22 00:31:41.157645 containerd[1635]: time="2026-01-22T00:31:41.156449899Z" level=info msg="StartContainer for \"57463ce9fe36af8a0d995f626f5fbab8909ba4cf2e4331a3c2ce46dff9061171\" returns successfully" Jan 22 00:31:41.676149 kubelet[2961]: I0122 00:31:41.675972 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-fmnjg" podStartSLOduration=3.8112885629999997 podStartE2EDuration="14.675952777s" podCreationTimestamp="2026-01-22 00:31:27 +0000 UTC" firstStartedPulling="2026-01-22 00:31:29.230422602 +0000 UTC m=+9.259328166" lastFinishedPulling="2026-01-22 
00:31:40.095086816 +0000 UTC m=+20.123992380" observedRunningTime="2026-01-22 00:31:41.675464064 +0000 UTC m=+21.704369628" watchObservedRunningTime="2026-01-22 00:31:41.675952777 +0000 UTC m=+21.704858341" Jan 22 00:31:58.140055 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 22 00:31:58.140985 kernel: audit: type=1106 audit(1769041918.106:515): pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:31:58.106000 audit[1853]: USER_END pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:31:58.108031 sudo[1853]: pam_unix(sudo:session): session closed for user root Jan 22 00:31:58.168198 sshd[1852]: Connection closed by 10.0.0.1 port 53474 Jan 22 00:31:58.172125 kernel: audit: type=1104 audit(1769041918.112:516): pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:31:58.112000 audit[1853]: CRED_DISP pid=1853 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:31:58.175577 sshd-session[1849]: pam_unix(sshd:session): session closed for user core Jan 22 00:31:58.300308 kernel: audit: type=1106 audit(1769041918.219:517): pid=1849 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:31:58.219000 audit[1849]: USER_END pid=1849 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:31:58.237382 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:53474.service: Deactivated successfully. Jan 22 00:31:58.262242 systemd[1]: session-9.scope: Deactivated successfully. Jan 22 00:31:58.269074 systemd[1]: session-9.scope: Consumed 21.888s CPU time, 223.9M memory peak. Jan 22 00:31:58.433751 kernel: audit: type=1104 audit(1769041918.221:518): pid=1849 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:31:58.221000 audit[1849]: CRED_DISP pid=1849 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:31:58.431270 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit. Jan 22 00:31:58.441574 systemd-logind[1609]: Removed session 9. 
Jan 22 00:31:58.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.25:22-10.0.0.1:53474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:31:58.474936 kernel: audit: type=1131 audit(1769041918.239:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.25:22-10.0.0.1:53474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:32:03.348000 audit[3388]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:03.389379 kernel: audit: type=1325 audit(1769041923.348:520): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:03.348000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdf8e07790 a2=0 a3=7ffdf8e0777c items=0 ppid=3099 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:03.443347 kernel: audit: type=1300 audit(1769041923.348:520): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdf8e07790 a2=0 a3=7ffdf8e0777c items=0 ppid=3099 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:03.456073 kernel: audit: type=1327 audit(1769041923.348:520): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:03.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:03.392000 
audit[3388]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:03.490682 kernel: audit: type=1325 audit(1769041923.392:521): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:03.490784 kernel: audit: type=1300 audit(1769041923.392:521): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf8e07790 a2=0 a3=0 items=0 ppid=3099 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:03.392000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdf8e07790 a2=0 a3=0 items=0 ppid=3099 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:03.554048 kernel: audit: type=1327 audit(1769041923.392:521): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:03.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:04.563000 audit[3390]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:04.582985 kernel: audit: type=1325 audit(1769041924.563:522): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:04.563000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc7a5a9d00 a2=0 a3=7ffc7a5a9cec items=0 ppid=3099 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:04.647026 kernel: audit: type=1300 audit(1769041924.563:522): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc7a5a9d00 a2=0 a3=7ffc7a5a9cec items=0 ppid=3099 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:04.647171 kernel: audit: type=1327 audit(1769041924.563:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:04.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:04.667108 kernel: audit: type=1325 audit(1769041924.586:523): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:04.586000 audit[3390]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:04.586000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7a5a9d00 a2=0 a3=0 items=0 ppid=3099 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:04.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:17.322142 kubelet[2961]: E0122 00:32:17.321762 2961 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.179s" Jan 22 00:32:24.268111 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 22 00:32:24.268654 kernel: audit: type=1325 
audit(1769041944.205:524): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:24.205000 audit[3394]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:24.341096 kernel: audit: type=1300 audit(1769041944.205:524): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd054fa2e0 a2=0 a3=7ffd054fa2cc items=0 ppid=3099 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:24.205000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd054fa2e0 a2=0 a3=7ffd054fa2cc items=0 ppid=3099 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:24.205000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:24.378058 kernel: audit: type=1327 audit(1769041944.205:524): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:24.477000 audit[3394]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:24.521094 kernel: audit: type=1325 audit(1769041944.477:525): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:24.477000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd054fa2e0 a2=0 a3=0 items=0 ppid=3099 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:24.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:24.590080 kernel: audit: type=1300 audit(1769041944.477:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd054fa2e0 a2=0 a3=0 items=0 ppid=3099 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:24.590232 kernel: audit: type=1327 audit(1769041944.477:525): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:25.758000 audit[3396]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:25.806333 kernel: audit: type=1325 audit(1769041945.758:526): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:25.822337 kernel: audit: type=1300 audit(1769041945.758:526): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3c77f760 a2=0 a3=7ffc3c77f74c items=0 ppid=3099 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:25.758000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3c77f760 a2=0 a3=7ffc3c77f74c items=0 ppid=3099 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:25.758000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:25.931971 kernel: audit: type=1327 audit(1769041945.758:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:25.915000 audit[3396]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:25.979373 kernel: audit: type=1325 audit(1769041945.915:527): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:25.915000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3c77f760 a2=0 a3=0 items=0 ppid=3099 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:25.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:31.789052 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 22 00:32:31.793204 kernel: audit: type=1325 audit(1769041951.755:528): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:31.755000 audit[3400]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:31.755000 audit[3400]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc0c49c630 a2=0 a3=7ffc0c49c61c items=0 ppid=3099 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:31.880376 kernel: audit: type=1300 
audit(1769041951.755:528): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc0c49c630 a2=0 a3=7ffc0c49c61c items=0 ppid=3099 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:31.881362 kernel: audit: type=1327 audit(1769041951.755:528): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:31.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:31.927057 kernel: audit: type=1325 audit(1769041951.762:529): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:31.762000 audit[3400]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:31.991356 kernel: audit: type=1300 audit(1769041951.762:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0c49c630 a2=0 a3=0 items=0 ppid=3099 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:31.762000 audit[3400]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0c49c630 a2=0 a3=0 items=0 ppid=3099 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:31.970102 systemd[1]: Created slice kubepods-besteffort-pod66c7b1ec_9452_495a_a443_178d8a87e06d.slice - libcontainer container kubepods-besteffort-pod66c7b1ec_9452_495a_a443_178d8a87e06d.slice. 
Jan 22 00:32:31.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:32.020073 kernel: audit: type=1327 audit(1769041951.762:529): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:32.116570 kubelet[2961]: I0122 00:32:32.111466 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66c7b1ec-9452-495a-a443-178d8a87e06d-tigera-ca-bundle\") pod \"calico-typha-7d86fd84-4xgtp\" (UID: \"66c7b1ec-9452-495a-a443-178d8a87e06d\") " pod="calico-system/calico-typha-7d86fd84-4xgtp" Jan 22 00:32:32.116570 kubelet[2961]: I0122 00:32:32.112156 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66c7b1ec-9452-495a-a443-178d8a87e06d-typha-certs\") pod \"calico-typha-7d86fd84-4xgtp\" (UID: \"66c7b1ec-9452-495a-a443-178d8a87e06d\") " pod="calico-system/calico-typha-7d86fd84-4xgtp" Jan 22 00:32:32.116570 kubelet[2961]: I0122 00:32:32.112187 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77t9j\" (UniqueName: \"kubernetes.io/projected/66c7b1ec-9452-495a-a443-178d8a87e06d-kube-api-access-77t9j\") pod \"calico-typha-7d86fd84-4xgtp\" (UID: \"66c7b1ec-9452-495a-a443-178d8a87e06d\") " pod="calico-system/calico-typha-7d86fd84-4xgtp" Jan 22 00:32:32.687117 kubelet[2961]: E0122 00:32:32.676281 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:32.687348 containerd[1635]: time="2026-01-22T00:32:32.677461936Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-7d86fd84-4xgtp,Uid:66c7b1ec-9452-495a-a443-178d8a87e06d,Namespace:calico-system,Attempt:0,}" Jan 22 00:32:32.835074 systemd[1]: Created slice kubepods-besteffort-pod67529ce1_cada_4edd_902c_125d05dadc27.slice - libcontainer container kubepods-besteffort-pod67529ce1_cada_4edd_902c_125d05dadc27.slice. Jan 22 00:32:32.838258 kubelet[2961]: I0122 00:32:32.835761 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67529ce1-cada-4edd-902c-125d05dadc27-node-certs\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.855456 kubelet[2961]: I0122 00:32:32.841272 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-policysync\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.855456 kubelet[2961]: I0122 00:32:32.841406 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-var-lib-calico\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.855456 kubelet[2961]: I0122 00:32:32.841439 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-cni-bin-dir\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.855456 kubelet[2961]: I0122 00:32:32.841467 2961 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-flexvol-driver-host\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.855456 kubelet[2961]: I0122 00:32:32.841589 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-lib-modules\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.863776 kubelet[2961]: I0122 00:32:32.841685 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5p8\" (UniqueName: \"kubernetes.io/projected/67529ce1-cada-4edd-902c-125d05dadc27-kube-api-access-ql5p8\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.863776 kubelet[2961]: I0122 00:32:32.841716 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-cni-log-dir\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.863776 kubelet[2961]: I0122 00:32:32.841737 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-cni-net-dir\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.863776 kubelet[2961]: I0122 00:32:32.841760 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-xtables-lock\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.863776 kubelet[2961]: I0122 00:32:32.841789 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67529ce1-cada-4edd-902c-125d05dadc27-tigera-ca-bundle\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:32.864197 kubelet[2961]: I0122 00:32:32.846161 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67529ce1-cada-4edd-902c-125d05dadc27-var-run-calico\") pod \"calico-node-h9m68\" (UID: \"67529ce1-cada-4edd-902c-125d05dadc27\") " pod="calico-system/calico-node-h9m68" Jan 22 00:32:33.001035 containerd[1635]: time="2026-01-22T00:32:33.000739545Z" level=info msg="connecting to shim cab9d0be8472c185643e22e53cd32f0a2d3df6b043977112de347cdd9be68c9b" address="unix:///run/containerd/s/ac8967c262645d8d4c29b211b0d73d2a7601a90ff6f29fd03883cc8ccbbfe49d" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:32:33.014142 kubelet[2961]: E0122 00:32:33.012231 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.014142 kubelet[2961]: W0122 00:32:33.012275 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.017033 kubelet[2961]: E0122 00:32:33.012587 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.014000 audit[3416]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:33.044362 kernel: audit: type=1325 audit(1769041953.014:530): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:33.014000 audit[3416]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe717b20a0 a2=0 a3=7ffe717b208c items=0 ppid=3099 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:33.057101 kubelet[2961]: E0122 00:32:33.054215 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.057101 kubelet[2961]: W0122 00:32:33.054250 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.057101 kubelet[2961]: E0122 00:32:33.054280 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.077995 kubelet[2961]: E0122 00:32:33.076724 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:33.078964 kubelet[2961]: E0122 00:32:33.078738 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.078964 kubelet[2961]: W0122 00:32:33.078768 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.081070 kubelet[2961]: E0122 00:32:33.078791 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.149416 kernel: audit: type=1300 audit(1769041953.014:530): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe717b20a0 a2=0 a3=7ffe717b208c items=0 ppid=3099 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:33.149640 kernel: audit: type=1327 audit(1769041953.014:530): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:33.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:33.172765 kernel: audit: type=1325 audit(1769041953.049:531): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:33.049000 audit[3416]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.156169 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.175610 kubelet[2961]: W0122 00:32:33.156192 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.156218 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.159007 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.175610 kubelet[2961]: W0122 00:32:33.159023 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.159041 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.161139 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.175610 kubelet[2961]: W0122 00:32:33.161154 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.161177 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.175610 kubelet[2961]: E0122 00:32:33.165374 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.176593 kubelet[2961]: W0122 00:32:33.165390 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.176593 kubelet[2961]: E0122 00:32:33.165407 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.176593 kubelet[2961]: E0122 00:32:33.170199 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.176593 kubelet[2961]: W0122 00:32:33.170215 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.176593 kubelet[2961]: E0122 00:32:33.170232 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.176593 kubelet[2961]: E0122 00:32:33.171325 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.176593 kubelet[2961]: W0122 00:32:33.171338 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.176593 kubelet[2961]: E0122 00:32:33.171349 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.176593 kubelet[2961]: E0122 00:32:33.175357 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.176593 kubelet[2961]: W0122 00:32:33.175373 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.177101 kubelet[2961]: E0122 00:32:33.175391 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.049000 audit[3416]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe717b20a0 a2=0 a3=0 items=0 ppid=3099 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:33.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:33.177423 kubelet[2961]: E0122 00:32:33.177377 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.177423 kubelet[2961]: W0122 00:32:33.177393 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.177423 kubelet[2961]: E0122 00:32:33.177410 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.195081 kubelet[2961]: E0122 00:32:33.194765 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.196127 kubelet[2961]: E0122 00:32:33.195140 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:33.199413 kubelet[2961]: W0122 00:32:33.197178 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.199413 kubelet[2961]: E0122 00:32:33.199373 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.202703 containerd[1635]: time="2026-01-22T00:32:33.202214395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h9m68,Uid:67529ce1-cada-4edd-902c-125d05dadc27,Namespace:calico-system,Attempt:0,}" Jan 22 00:32:33.203693 kubelet[2961]: E0122 00:32:33.203237 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.203693 kubelet[2961]: W0122 00:32:33.203388 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.203693 kubelet[2961]: E0122 00:32:33.203412 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.223017 kubelet[2961]: E0122 00:32:33.222060 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.223017 kubelet[2961]: W0122 00:32:33.222178 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.223017 kubelet[2961]: E0122 00:32:33.222215 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.226094 kubelet[2961]: E0122 00:32:33.225385 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.226094 kubelet[2961]: W0122 00:32:33.225580 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.226094 kubelet[2961]: E0122 00:32:33.225604 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.258794 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.265190 kubelet[2961]: W0122 00:32:33.259398 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.259441 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.263151 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.265190 kubelet[2961]: W0122 00:32:33.263168 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.263198 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.264029 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.265190 kubelet[2961]: W0122 00:32:33.264044 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.264066 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.265190 kubelet[2961]: E0122 00:32:33.264320 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.265750 kubelet[2961]: W0122 00:32:33.264337 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.265750 kubelet[2961]: E0122 00:32:33.264349 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.266628 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.274123 kubelet[2961]: W0122 00:32:33.266645 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.266667 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.268779 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.274123 kubelet[2961]: W0122 00:32:33.268793 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.269005 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.269258 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.274123 kubelet[2961]: W0122 00:32:33.269269 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.269287 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.274123 kubelet[2961]: E0122 00:32:33.269993 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.274582 kubelet[2961]: W0122 00:32:33.270007 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.274582 kubelet[2961]: E0122 00:32:33.270023 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.274582 kubelet[2961]: E0122 00:32:33.271362 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.274582 kubelet[2961]: W0122 00:32:33.271378 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.274582 kubelet[2961]: E0122 00:32:33.271392 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.274582 kubelet[2961]: I0122 00:32:33.271420 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3f33826-c9a7-4e28-a985-814cedd1e52b-kubelet-dir\") pod \"csi-node-driver-kfg9f\" (UID: \"d3f33826-c9a7-4e28-a985-814cedd1e52b\") " pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:32:33.274582 kubelet[2961]: E0122 00:32:33.272049 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.274582 kubelet[2961]: W0122 00:32:33.272066 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.274582 kubelet[2961]: E0122 00:32:33.272088 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.277050 kubelet[2961]: I0122 00:32:33.272108 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpnjm\" (UniqueName: \"kubernetes.io/projected/d3f33826-c9a7-4e28-a985-814cedd1e52b-kube-api-access-fpnjm\") pod \"csi-node-driver-kfg9f\" (UID: \"d3f33826-c9a7-4e28-a985-814cedd1e52b\") " pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:32:33.277050 kubelet[2961]: E0122 00:32:33.272382 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.277050 kubelet[2961]: W0122 00:32:33.272395 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.277050 kubelet[2961]: E0122 00:32:33.272410 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.277050 kubelet[2961]: I0122 00:32:33.272436 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3f33826-c9a7-4e28-a985-814cedd1e52b-varrun\") pod \"csi-node-driver-kfg9f\" (UID: \"d3f33826-c9a7-4e28-a985-814cedd1e52b\") " pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:32:33.277050 kubelet[2961]: E0122 00:32:33.272791 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.277050 kubelet[2961]: W0122 00:32:33.273057 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.277050 kubelet[2961]: E0122 00:32:33.273072 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.279428 kubelet[2961]: I0122 00:32:33.273091 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3f33826-c9a7-4e28-a985-814cedd1e52b-socket-dir\") pod \"csi-node-driver-kfg9f\" (UID: \"d3f33826-c9a7-4e28-a985-814cedd1e52b\") " pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:32:33.279428 kubelet[2961]: E0122 00:32:33.273354 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.279428 kubelet[2961]: W0122 00:32:33.273368 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.279428 kubelet[2961]: E0122 00:32:33.273386 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.279428 kubelet[2961]: I0122 00:32:33.273412 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3f33826-c9a7-4e28-a985-814cedd1e52b-registration-dir\") pod \"csi-node-driver-kfg9f\" (UID: \"d3f33826-c9a7-4e28-a985-814cedd1e52b\") " pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:32:33.279428 kubelet[2961]: E0122 00:32:33.273992 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.279428 kubelet[2961]: W0122 00:32:33.274006 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.279428 kubelet[2961]: E0122 00:32:33.274022 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.279428 kubelet[2961]: E0122 00:32:33.274629 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.280996 kubelet[2961]: W0122 00:32:33.274641 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.280996 kubelet[2961]: E0122 00:32:33.274653 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:33.280996 kubelet[2961]: E0122 00:32:33.275247 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.280996 kubelet[2961]: W0122 00:32:33.275259 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.280996 kubelet[2961]: E0122 00:32:33.275277 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:33.280996 kubelet[2961]: E0122 00:32:33.275628 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:33.280996 kubelet[2961]: W0122 00:32:33.275640 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:33.280996 kubelet[2961]: E0122 00:32:33.275661 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 22 00:32:33.280996 kubelet[2961]: E0122 00:32:33.276110 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.280996 kubelet[2961]: W0122 00:32:33.276122 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.276134 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.276381 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.281342 kubelet[2961]: W0122 00:32:33.276395 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.276407 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.276783 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.281342 kubelet[2961]: W0122 00:32:33.276970 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.276991 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.277458 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.281342 kubelet[2961]: W0122 00:32:33.277568 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.281342 kubelet[2961]: E0122 00:32:33.277584 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.281792 kubelet[2961]: E0122 00:32:33.278128 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.281792 kubelet[2961]: W0122 00:32:33.278141 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.281792 kubelet[2961]: E0122 00:32:33.278162 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.281792 kubelet[2961]: E0122 00:32:33.280074 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.281792 kubelet[2961]: W0122 00:32:33.280088 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.281792 kubelet[2961]: E0122 00:32:33.280108 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.387303 kubelet[2961]: E0122 00:32:33.385183 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.409984 kubelet[2961]: W0122 00:32:33.409773 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.410286 kubelet[2961]: E0122 00:32:33.410263 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.413610 kubelet[2961]: E0122 00:32:33.413243 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.413610 kubelet[2961]: W0122 00:32:33.413273 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.413610 kubelet[2961]: E0122 00:32:33.413293 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.421928 kubelet[2961]: E0122 00:32:33.421891 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.422778 containerd[1635]: time="2026-01-22T00:32:33.422351185Z" level=info msg="connecting to shim 060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271" address="unix:///run/containerd/s/39356ce56dbcdb71eb22ee925c9517c9e05605054c892197b46c1cd6d35d935c" namespace=k8s.io protocol=ttrpc version=3
Jan 22 00:32:33.425781 kubelet[2961]: W0122 00:32:33.425748 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.427024 kubelet[2961]: E0122 00:32:33.426772 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.445973 kubelet[2961]: E0122 00:32:33.443045 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.445973 kubelet[2961]: W0122 00:32:33.443161 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.445973 kubelet[2961]: E0122 00:32:33.443193 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.450413 kubelet[2961]: E0122 00:32:33.450382 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.454599 kubelet[2961]: W0122 00:32:33.454454 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.457102 kubelet[2961]: E0122 00:32:33.457072 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.480066 kubelet[2961]: E0122 00:32:33.479018 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.488204 kubelet[2961]: W0122 00:32:33.480241 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.488204 kubelet[2961]: E0122 00:32:33.480285 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.488204 kubelet[2961]: E0122 00:32:33.481379 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.488204 kubelet[2961]: W0122 00:32:33.483210 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.488204 kubelet[2961]: E0122 00:32:33.483233 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.488204 kubelet[2961]: E0122 00:32:33.488151 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.488204 kubelet[2961]: W0122 00:32:33.488174 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.488204 kubelet[2961]: E0122 00:32:33.488199 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.497008 kubelet[2961]: E0122 00:32:33.496082 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.497008 kubelet[2961]: W0122 00:32:33.496193 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.497008 kubelet[2961]: E0122 00:32:33.496223 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.499246 kubelet[2961]: E0122 00:32:33.498793 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.499246 kubelet[2961]: W0122 00:32:33.498984 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.499246 kubelet[2961]: E0122 00:32:33.499003 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.506193 kubelet[2961]: E0122 00:32:33.505406 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.506193 kubelet[2961]: W0122 00:32:33.505603 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.506193 kubelet[2961]: E0122 00:32:33.505755 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.507081 kubelet[2961]: E0122 00:32:33.507060 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.507171 kubelet[2961]: W0122 00:32:33.507153 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.507257 kubelet[2961]: E0122 00:32:33.507242 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.508776 kubelet[2961]: E0122 00:32:33.508756 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.509222 kubelet[2961]: W0122 00:32:33.509201 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.509306 kubelet[2961]: E0122 00:32:33.509290 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.512040 kubelet[2961]: E0122 00:32:33.512017 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.514566 kubelet[2961]: W0122 00:32:33.514442 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.514681 kubelet[2961]: E0122 00:32:33.514663 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.530942 kubelet[2961]: E0122 00:32:33.520237 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.530942 kubelet[2961]: W0122 00:32:33.523748 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.530942 kubelet[2961]: E0122 00:32:33.523784 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.539990 kubelet[2961]: E0122 00:32:33.539028 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.539990 kubelet[2961]: W0122 00:32:33.539058 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.539990 kubelet[2961]: E0122 00:32:33.539083 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.540421 kubelet[2961]: E0122 00:32:33.540333 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.540421 kubelet[2961]: W0122 00:32:33.540345 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.540421 kubelet[2961]: E0122 00:32:33.540359 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.550058 kubelet[2961]: E0122 00:32:33.548091 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.550058 kubelet[2961]: W0122 00:32:33.548336 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.550058 kubelet[2961]: E0122 00:32:33.548371 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.559917 kubelet[2961]: E0122 00:32:33.558277 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.559917 kubelet[2961]: W0122 00:32:33.558305 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.559917 kubelet[2961]: E0122 00:32:33.558335 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.562605 systemd[1]: Started cri-containerd-cab9d0be8472c185643e22e53cd32f0a2d3df6b043977112de347cdd9be68c9b.scope - libcontainer container cab9d0be8472c185643e22e53cd32f0a2d3df6b043977112de347cdd9be68c9b.
Jan 22 00:32:33.572133 kubelet[2961]: E0122 00:32:33.572023 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.572133 kubelet[2961]: W0122 00:32:33.572128 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.591351 kubelet[2961]: E0122 00:32:33.572162 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.591351 kubelet[2961]: E0122 00:32:33.580388 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.591351 kubelet[2961]: W0122 00:32:33.580407 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.591351 kubelet[2961]: E0122 00:32:33.580433 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.595645 kubelet[2961]: E0122 00:32:33.595448 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.595645 kubelet[2961]: W0122 00:32:33.595616 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.595645 kubelet[2961]: E0122 00:32:33.595642 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.602725 kubelet[2961]: E0122 00:32:33.596094 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.602725 kubelet[2961]: W0122 00:32:33.596109 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.602725 kubelet[2961]: E0122 00:32:33.596121 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.615960 kubelet[2961]: E0122 00:32:33.615280 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.615960 kubelet[2961]: W0122 00:32:33.615384 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.615960 kubelet[2961]: E0122 00:32:33.615413 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.622186 kubelet[2961]: E0122 00:32:33.619398 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.625744 kubelet[2961]: W0122 00:32:33.625172 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.625744 kubelet[2961]: E0122 00:32:33.625220 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.732180 kubelet[2961]: E0122 00:32:33.732130 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:33.732445 kubelet[2961]: W0122 00:32:33.732343 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:33.732445 kubelet[2961]: E0122 00:32:33.732384 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:33.857000 audit: BPF prog-id=149 op=LOAD
Jan 22 00:32:33.861000 audit: BPF prog-id=150 op=LOAD
Jan 22 00:32:33.861000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.861000 audit: BPF prog-id=150 op=UNLOAD
Jan 22 00:32:33.861000 audit[3435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.868000 audit: BPF prog-id=151 op=LOAD
Jan 22 00:32:33.868000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.868000 audit: BPF prog-id=152 op=LOAD
Jan 22 00:32:33.868000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.868000 audit: BPF prog-id=152 op=UNLOAD
Jan 22 00:32:33.868000 audit[3435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.868000 audit: BPF prog-id=151 op=UNLOAD
Jan 22 00:32:33.868000 audit[3435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.879000 audit: BPF prog-id=153 op=LOAD
Jan 22 00:32:33.879000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3414 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:33.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361623964306265383437326331383536343365323265353363643332
Jan 22 00:32:33.970322 systemd[1]: Started cri-containerd-060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271.scope - libcontainer container 060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271.
Jan 22 00:32:34.256000 audit: BPF prog-id=154 op=LOAD
Jan 22 00:32:34.274000 audit: BPF prog-id=155 op=LOAD
Jan 22 00:32:34.274000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a4238 a2=98 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.280000 audit: BPF prog-id=155 op=UNLOAD
Jan 22 00:32:34.280000 audit[3528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.308000 audit: BPF prog-id=156 op=LOAD
Jan 22 00:32:34.308000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a4488 a2=98 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.308000 audit: BPF prog-id=157 op=LOAD
Jan 22 00:32:34.308000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a4218 a2=98 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.308000 audit: BPF prog-id=157 op=UNLOAD
Jan 22 00:32:34.308000 audit[3528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.308000 audit: BPF prog-id=156 op=UNLOAD
Jan 22 00:32:34.308000 audit[3528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.309000 audit: BPF prog-id=158 op=LOAD
Jan 22 00:32:34.309000 audit[3528]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a46e8 a2=98 a3=0 items=0 ppid=3494 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:32:34.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036306233633938333139666462306132376266616133303033653833
Jan 22 00:32:34.599988 containerd[1635]: time="2026-01-22T00:32:34.599738170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d86fd84-4xgtp,Uid:66c7b1ec-9452-495a-a443-178d8a87e06d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cab9d0be8472c185643e22e53cd32f0a2d3df6b043977112de347cdd9be68c9b\""
Jan 22 00:32:34.623794 kubelet[2961]: E0122 00:32:34.622380 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:32:34.640373 containerd[1635]: time="2026-01-22T00:32:34.637621062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 22 00:32:34.660253 containerd[1635]: time="2026-01-22T00:32:34.659668528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h9m68,Uid:67529ce1-cada-4edd-902c-125d05dadc27,Namespace:calico-system,Attempt:0,} returns sandbox id \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\""
Jan 22 00:32:34.685979 kubelet[2961]: E0122 00:32:34.685595 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:32:34.990709 kubelet[2961]: E0122 00:32:34.985335 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"
Jan 22 00:32:36.429250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203662666.mount: Deactivated successfully.
Jan 22 00:32:36.988088 kubelet[2961]: E0122 00:32:36.985140 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"
Jan 22 00:32:38.989209 kubelet[2961]: E0122 00:32:38.988188 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"
Jan 22 00:32:39.993725 kubelet[2961]: E0122 00:32:39.993082 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:32:40.074768 kubelet[2961]: E0122 00:32:40.074123 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:40.074768 kubelet[2961]: W0122 00:32:40.074229 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:40.074768 kubelet[2961]: E0122 00:32:40.074263 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:40.076200 kubelet[2961]: E0122 00:32:40.076023 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:40.076200 kubelet[2961]: W0122 00:32:40.076125 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:40.076200 kubelet[2961]: E0122 00:32:40.076144 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:40.083077 kubelet[2961]: E0122 00:32:40.083027 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:40.083077 kubelet[2961]: W0122 00:32:40.083057 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:40.083077 kubelet[2961]: E0122 00:32:40.083080 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:40.084168 kubelet[2961]: E0122 00:32:40.083524 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:40.084168 kubelet[2961]: W0122 00:32:40.083612 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:40.084168 kubelet[2961]: E0122 00:32:40.083627 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 22 00:32:40.087773 kubelet[2961]: E0122 00:32:40.086998 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 22 00:32:40.087773 kubelet[2961]: W0122 00:32:40.087096 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 22 00:32:40.087773 kubelet[2961]: E0122 00:32:40.087115 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 22 00:32:40.091736 kubelet[2961]: E0122 00:32:40.089382 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.091736 kubelet[2961]: W0122 00:32:40.089401 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.091736 kubelet[2961]: E0122 00:32:40.089420 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.091736 kubelet[2961]: E0122 00:32:40.090040 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.091736 kubelet[2961]: W0122 00:32:40.090052 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.091736 kubelet[2961]: E0122 00:32:40.090066 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.097397 kubelet[2961]: E0122 00:32:40.094018 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.097397 kubelet[2961]: W0122 00:32:40.094037 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.097397 kubelet[2961]: E0122 00:32:40.094051 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.097397 kubelet[2961]: E0122 00:32:40.094326 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.097397 kubelet[2961]: W0122 00:32:40.094337 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.097397 kubelet[2961]: E0122 00:32:40.094347 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.097787 kubelet[2961]: E0122 00:32:40.097677 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.097787 kubelet[2961]: W0122 00:32:40.097691 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.097787 kubelet[2961]: E0122 00:32:40.097707 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.098667 kubelet[2961]: E0122 00:32:40.098132 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.098667 kubelet[2961]: W0122 00:32:40.098231 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.098667 kubelet[2961]: E0122 00:32:40.098248 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.101058 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.106067 kubelet[2961]: W0122 00:32:40.101077 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.101091 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.101677 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.106067 kubelet[2961]: W0122 00:32:40.101690 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.101705 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.103205 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.106067 kubelet[2961]: W0122 00:32:40.103217 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.103229 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.106067 kubelet[2961]: E0122 00:32:40.103588 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.111334 kubelet[2961]: W0122 00:32:40.103599 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.111334 kubelet[2961]: E0122 00:32:40.103610 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.111334 kubelet[2961]: E0122 00:32:40.104047 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.111334 kubelet[2961]: W0122 00:32:40.104059 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.111334 kubelet[2961]: E0122 00:32:40.104070 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.111334 kubelet[2961]: E0122 00:32:40.104336 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.111334 kubelet[2961]: W0122 00:32:40.104346 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.111334 kubelet[2961]: E0122 00:32:40.104356 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.111334 kubelet[2961]: E0122 00:32:40.107083 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.111334 kubelet[2961]: W0122 00:32:40.107101 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.107118 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.107391 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115008 kubelet[2961]: W0122 00:32:40.107403 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.107421 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.107785 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115008 kubelet[2961]: W0122 00:32:40.107961 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.107980 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.108692 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115008 kubelet[2961]: W0122 00:32:40.108705 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115008 kubelet[2961]: E0122 00:32:40.108717 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.109149 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115263 kubelet[2961]: W0122 00:32:40.109160 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.109172 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.109531 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115263 kubelet[2961]: W0122 00:32:40.109544 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.109555 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.110271 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115263 kubelet[2961]: W0122 00:32:40.110283 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.110296 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:40.115263 kubelet[2961]: E0122 00:32:40.114544 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:40.115625 kubelet[2961]: W0122 00:32:40.114565 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:40.115625 kubelet[2961]: E0122 00:32:40.114582 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:40.988679 kubelet[2961]: E0122 00:32:40.988112 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:42.988652 kubelet[2961]: E0122 00:32:42.988015 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:44.064298 containerd[1635]: time="2026-01-22T00:32:44.064032914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:32:44.069582 containerd[1635]: time="2026-01-22T00:32:44.068715880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 22 00:32:44.072758 containerd[1635]: time="2026-01-22T00:32:44.072375634Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:32:44.084734 containerd[1635]: time="2026-01-22T00:32:44.084349168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:32:44.087591 containerd[1635]: time="2026-01-22T00:32:44.087232379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 9.449465607s" Jan 22 00:32:44.087591 containerd[1635]: time="2026-01-22T00:32:44.087267986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 22 00:32:44.099691 containerd[1635]: time="2026-01-22T00:32:44.099043803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 22 00:32:44.210103 containerd[1635]: time="2026-01-22T00:32:44.205544529Z" level=info msg="CreateContainer within sandbox \"cab9d0be8472c185643e22e53cd32f0a2d3df6b043977112de347cdd9be68c9b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 22 00:32:44.308627 containerd[1635]: time="2026-01-22T00:32:44.307981015Z" level=info msg="Container c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:32:44.310286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807506104.mount: Deactivated successfully. 
Jan 22 00:32:44.369399 containerd[1635]: time="2026-01-22T00:32:44.368592614Z" level=info msg="CreateContainer within sandbox \"cab9d0be8472c185643e22e53cd32f0a2d3df6b043977112de347cdd9be68c9b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf\"" Jan 22 00:32:44.372226 containerd[1635]: time="2026-01-22T00:32:44.370727174Z" level=info msg="StartContainer for \"c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf\"" Jan 22 00:32:44.387539 containerd[1635]: time="2026-01-22T00:32:44.385269979Z" level=info msg="connecting to shim c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf" address="unix:///run/containerd/s/ac8967c262645d8d4c29b211b0d73d2a7601a90ff6f29fd03883cc8ccbbfe49d" protocol=ttrpc version=3 Jan 22 00:32:44.563609 systemd[1]: Started cri-containerd-c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf.scope - libcontainer container c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf. 
Jan 22 00:32:44.717000 audit: BPF prog-id=159 op=LOAD Jan 22 00:32:44.736031 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 22 00:32:44.736204 kernel: audit: type=1334 audit(1769041964.717:548): prog-id=159 op=LOAD Jan 22 00:32:44.739664 kernel: audit: type=1334 audit(1769041964.726:549): prog-id=160 op=LOAD Jan 22 00:32:44.726000 audit: BPF prog-id=160 op=LOAD Jan 22 00:32:44.726000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.812752 kernel: audit: type=1300 audit(1769041964.726:549): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.855173 kernel: audit: type=1327 audit(1769041964.726:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.726000 audit: BPF prog-id=160 op=UNLOAD Jan 22 00:32:44.726000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.900142 kernel: audit: type=1334 audit(1769041964.726:550): prog-id=160 op=UNLOAD Jan 22 00:32:44.900276 kernel: audit: type=1300 audit(1769041964.726:550): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.938198 kernel: audit: type=1327 audit(1769041964.726:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.727000 audit: BPF prog-id=161 op=LOAD Jan 22 00:32:44.727000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.994137 kernel: audit: type=1334 audit(1769041964.727:551): prog-id=161 op=LOAD Jan 22 00:32:44.994284 kernel: audit: type=1300 audit(1769041964.727:551): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:32:44.994333 kernel: audit: type=1327 audit(1769041964.727:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.995557 kubelet[2961]: E0122 00:32:44.992024 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:44.727000 audit: BPF prog-id=162 op=LOAD Jan 22 00:32:44.727000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.727000 audit: BPF prog-id=162 op=UNLOAD Jan 22 00:32:44.727000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.727000 audit: BPF prog-id=161 op=UNLOAD Jan 22 00:32:44.727000 audit[3611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:44.727000 audit: BPF prog-id=163 op=LOAD Jan 22 00:32:44.727000 audit[3611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3414 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:44.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330313662383065323962313861383465336334353933313836656364 Jan 22 00:32:45.043722 containerd[1635]: time="2026-01-22T00:32:45.041768674Z" level=info msg="StartContainer for \"c016b80e29b18a84e3c4593186ecd970cb59a901cc5f0d2f8ed0f7483c0404cf\" returns successfully" Jan 22 00:32:45.253102 kubelet[2961]: E0122 
00:32:45.252657 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:45.284721 kubelet[2961]: E0122 00:32:45.284593 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.284721 kubelet[2961]: W0122 00:32:45.284623 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.284721 kubelet[2961]: E0122 00:32:45.284647 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.306575 kubelet[2961]: E0122 00:32:45.306526 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.306743 kubelet[2961]: W0122 00:32:45.306721 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.307030 kubelet[2961]: E0122 00:32:45.307007 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.309977 kubelet[2961]: E0122 00:32:45.308784 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.310129 kubelet[2961]: W0122 00:32:45.310104 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.310230 kubelet[2961]: E0122 00:32:45.310211 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.314666 kubelet[2961]: E0122 00:32:45.314608 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.314666 kubelet[2961]: W0122 00:32:45.314635 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.314666 kubelet[2961]: E0122 00:32:45.314661 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.318187 kubelet[2961]: E0122 00:32:45.317765 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.318187 kubelet[2961]: W0122 00:32:45.318080 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.318187 kubelet[2961]: E0122 00:32:45.318105 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.321768 kubelet[2961]: E0122 00:32:45.320023 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.321768 kubelet[2961]: W0122 00:32:45.320123 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.321768 kubelet[2961]: E0122 00:32:45.320140 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.321768 kubelet[2961]: E0122 00:32:45.321588 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.321768 kubelet[2961]: W0122 00:32:45.321600 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.321768 kubelet[2961]: E0122 00:32:45.321613 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.323788 kubelet[2961]: E0122 00:32:45.322752 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.323788 kubelet[2961]: W0122 00:32:45.322767 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.323788 kubelet[2961]: E0122 00:32:45.322780 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.331051 kubelet[2961]: E0122 00:32:45.330668 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.331051 kubelet[2961]: W0122 00:32:45.330692 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.331051 kubelet[2961]: E0122 00:32:45.330709 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.341021 kubelet[2961]: E0122 00:32:45.338649 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.341021 kubelet[2961]: W0122 00:32:45.338760 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.341021 kubelet[2961]: E0122 00:32:45.338781 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.341021 kubelet[2961]: E0122 00:32:45.340316 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.341021 kubelet[2961]: W0122 00:32:45.340329 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.341021 kubelet[2961]: E0122 00:32:45.340344 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.342227 kubelet[2961]: E0122 00:32:45.342116 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.342287 kubelet[2961]: W0122 00:32:45.342233 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.342287 kubelet[2961]: E0122 00:32:45.342253 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.344723 kubelet[2961]: E0122 00:32:45.344612 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.344723 kubelet[2961]: W0122 00:32:45.344635 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.344723 kubelet[2961]: E0122 00:32:45.344652 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.347782 kubelet[2961]: E0122 00:32:45.347244 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.347782 kubelet[2961]: W0122 00:32:45.347335 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.347782 kubelet[2961]: E0122 00:32:45.347355 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.352066 kubelet[2961]: E0122 00:32:45.351183 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.352066 kubelet[2961]: W0122 00:32:45.351295 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.352066 kubelet[2961]: E0122 00:32:45.351313 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.431566 kubelet[2961]: E0122 00:32:45.430193 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.431566 kubelet[2961]: W0122 00:32:45.430235 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.431566 kubelet[2961]: E0122 00:32:45.430266 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.432557 kubelet[2961]: E0122 00:32:45.432186 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.432557 kubelet[2961]: W0122 00:32:45.432202 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.432557 kubelet[2961]: E0122 00:32:45.432220 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.450600 kubelet[2961]: E0122 00:32:45.442629 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.450600 kubelet[2961]: W0122 00:32:45.442730 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.450600 kubelet[2961]: E0122 00:32:45.442753 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.450600 kubelet[2961]: E0122 00:32:45.449260 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.450600 kubelet[2961]: W0122 00:32:45.449281 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.450600 kubelet[2961]: E0122 00:32:45.449301 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.454289 kubelet[2961]: E0122 00:32:45.453678 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.454289 kubelet[2961]: W0122 00:32:45.453788 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.454289 kubelet[2961]: E0122 00:32:45.453980 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.457312 kubelet[2961]: E0122 00:32:45.456071 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.457312 kubelet[2961]: W0122 00:32:45.456088 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.457312 kubelet[2961]: E0122 00:32:45.456104 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.457312 kubelet[2961]: E0122 00:32:45.456738 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.457312 kubelet[2961]: W0122 00:32:45.456750 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.457312 kubelet[2961]: E0122 00:32:45.456762 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.457312 kubelet[2961]: E0122 00:32:45.457214 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.457312 kubelet[2961]: W0122 00:32:45.457227 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.457312 kubelet[2961]: E0122 00:32:45.457241 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.461659 kubelet[2961]: E0122 00:32:45.460553 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.461659 kubelet[2961]: W0122 00:32:45.460566 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.461659 kubelet[2961]: E0122 00:32:45.460580 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.461659 kubelet[2961]: E0122 00:32:45.461023 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.461659 kubelet[2961]: W0122 00:32:45.461034 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.461659 kubelet[2961]: E0122 00:32:45.461046 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.464579 kubelet[2961]: E0122 00:32:45.464015 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.464579 kubelet[2961]: W0122 00:32:45.464028 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.464579 kubelet[2961]: E0122 00:32:45.464041 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.469652 kubelet[2961]: E0122 00:32:45.468676 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.469652 kubelet[2961]: W0122 00:32:45.468694 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.469652 kubelet[2961]: E0122 00:32:45.468709 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.475966 kubelet[2961]: E0122 00:32:45.475712 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.475966 kubelet[2961]: W0122 00:32:45.475737 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.475966 kubelet[2961]: E0122 00:32:45.475756 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.478239 kubelet[2961]: E0122 00:32:45.477674 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.478239 kubelet[2961]: W0122 00:32:45.478217 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.478239 kubelet[2961]: E0122 00:32:45.478239 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.487146 kubelet[2961]: E0122 00:32:45.483186 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.487146 kubelet[2961]: W0122 00:32:45.483291 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.487146 kubelet[2961]: E0122 00:32:45.483309 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.487146 kubelet[2961]: E0122 00:32:45.485509 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.487146 kubelet[2961]: W0122 00:32:45.485522 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.487146 kubelet[2961]: E0122 00:32:45.485536 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.489700 kubelet[2961]: E0122 00:32:45.488354 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.489700 kubelet[2961]: W0122 00:32:45.488556 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.489700 kubelet[2961]: E0122 00:32:45.488577 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:45.489700 kubelet[2961]: E0122 00:32:45.489084 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:45.489700 kubelet[2961]: W0122 00:32:45.489098 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:45.489700 kubelet[2961]: E0122 00:32:45.489109 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:45.841759 containerd[1635]: time="2026-01-22T00:32:45.839236975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:32:45.847993 containerd[1635]: time="2026-01-22T00:32:45.846065357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 22 00:32:45.853987 containerd[1635]: time="2026-01-22T00:32:45.853716776Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:32:45.869196 containerd[1635]: time="2026-01-22T00:32:45.869118655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:32:45.872979 containerd[1635]: time="2026-01-22T00:32:45.871713814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.772423965s" Jan 22 00:32:45.872979 containerd[1635]: time="2026-01-22T00:32:45.871755751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 22 00:32:45.911155 containerd[1635]: time="2026-01-22T00:32:45.911101106Z" level=info msg="CreateContainer within sandbox \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 22 00:32:46.037000 containerd[1635]: time="2026-01-22T00:32:46.028138535Z" level=info msg="Container 03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:32:46.104538 containerd[1635]: time="2026-01-22T00:32:46.103263025Z" level=info msg="CreateContainer within sandbox \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517\"" Jan 22 00:32:46.113610 containerd[1635]: time="2026-01-22T00:32:46.113069704Z" level=info msg="StartContainer for \"03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517\"" Jan 22 00:32:46.139939 containerd[1635]: time="2026-01-22T00:32:46.139317640Z" level=info msg="connecting to shim 03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517" address="unix:///run/containerd/s/39356ce56dbcdb71eb22ee925c9517c9e05605054c892197b46c1cd6d35d935c" protocol=ttrpc version=3 Jan 22 00:32:46.275143 systemd[1]: Started cri-containerd-03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517.scope - libcontainer container 
03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517. Jan 22 00:32:46.320288 kubelet[2961]: E0122 00:32:46.320242 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:46.383015 kubelet[2961]: E0122 00:32:46.382200 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.383015 kubelet[2961]: W0122 00:32:46.382314 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.383015 kubelet[2961]: E0122 00:32:46.382352 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.384994 kubelet[2961]: E0122 00:32:46.384212 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.384994 kubelet[2961]: W0122 00:32:46.384234 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.384994 kubelet[2961]: E0122 00:32:46.384247 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.384994 kubelet[2961]: E0122 00:32:46.384591 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.384994 kubelet[2961]: W0122 00:32:46.384604 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.384994 kubelet[2961]: E0122 00:32:46.384618 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.390993 kubelet[2961]: E0122 00:32:46.386751 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.390993 kubelet[2961]: W0122 00:32:46.387321 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.390993 kubelet[2961]: E0122 00:32:46.387341 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.397962 kubelet[2961]: E0122 00:32:46.396209 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.397962 kubelet[2961]: W0122 00:32:46.396333 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.397962 kubelet[2961]: E0122 00:32:46.396350 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.399612 kubelet[2961]: E0122 00:32:46.398972 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.399612 kubelet[2961]: W0122 00:32:46.398992 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.399612 kubelet[2961]: E0122 00:32:46.399008 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.401986 kubelet[2961]: E0122 00:32:46.400526 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.401986 kubelet[2961]: W0122 00:32:46.400550 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.401986 kubelet[2961]: E0122 00:32:46.400563 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.403022 kubelet[2961]: E0122 00:32:46.402191 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.403022 kubelet[2961]: W0122 00:32:46.402214 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.403022 kubelet[2961]: E0122 00:32:46.402226 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.405980 kubelet[2961]: E0122 00:32:46.405131 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.405980 kubelet[2961]: W0122 00:32:46.405153 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.405980 kubelet[2961]: E0122 00:32:46.405166 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.407607 kubelet[2961]: E0122 00:32:46.407293 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.407607 kubelet[2961]: W0122 00:32:46.407605 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.407703 kubelet[2961]: E0122 00:32:46.407624 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.415245 kubelet[2961]: E0122 00:32:46.412486 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.415245 kubelet[2961]: W0122 00:32:46.412583 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.415245 kubelet[2961]: E0122 00:32:46.412602 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.415245 kubelet[2961]: E0122 00:32:46.415181 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.415245 kubelet[2961]: W0122 00:32:46.415194 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.415245 kubelet[2961]: E0122 00:32:46.415208 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.417263 kubelet[2961]: E0122 00:32:46.416965 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.417263 kubelet[2961]: W0122 00:32:46.417260 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.417373 kubelet[2961]: E0122 00:32:46.417280 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.425792 kubelet[2961]: E0122 00:32:46.420563 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.425792 kubelet[2961]: W0122 00:32:46.420585 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.425792 kubelet[2961]: E0122 00:32:46.420599 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.429673 kubelet[2961]: E0122 00:32:46.426390 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.429673 kubelet[2961]: W0122 00:32:46.426665 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.429673 kubelet[2961]: E0122 00:32:46.426682 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.452946 kubelet[2961]: I0122 00:32:46.451054 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d86fd84-4xgtp" podStartSLOduration=5.994685671 podStartE2EDuration="15.451028828s" podCreationTimestamp="2026-01-22 00:32:31 +0000 UTC" firstStartedPulling="2026-01-22 00:32:34.634240823 +0000 UTC m=+74.663146386" lastFinishedPulling="2026-01-22 00:32:44.09058398 +0000 UTC m=+84.119489543" observedRunningTime="2026-01-22 00:32:45.339778198 +0000 UTC m=+85.368683762" watchObservedRunningTime="2026-01-22 00:32:46.451028828 +0000 UTC m=+86.479934392" Jan 22 00:32:46.519165 kubelet[2961]: E0122 00:32:46.519122 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.521065 kubelet[2961]: W0122 00:32:46.521033 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.521219 kubelet[2961]: E0122 00:32:46.521197 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.526521 kubelet[2961]: E0122 00:32:46.526492 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.526633 kubelet[2961]: W0122 00:32:46.526614 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.527046 kubelet[2961]: E0122 00:32:46.527021 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.532043 kubelet[2961]: E0122 00:32:46.532022 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.532305 kubelet[2961]: W0122 00:32:46.532141 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.532305 kubelet[2961]: E0122 00:32:46.532167 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.541899 kubelet[2961]: E0122 00:32:46.541115 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.541899 kubelet[2961]: W0122 00:32:46.541139 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.541899 kubelet[2961]: E0122 00:32:46.541160 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.544899 kubelet[2961]: E0122 00:32:46.543382 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.544899 kubelet[2961]: W0122 00:32:46.544283 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.544899 kubelet[2961]: E0122 00:32:46.544318 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.548765 kubelet[2961]: E0122 00:32:46.547295 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.548765 kubelet[2961]: W0122 00:32:46.547481 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.548765 kubelet[2961]: E0122 00:32:46.547499 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.553572 kubelet[2961]: E0122 00:32:46.553160 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.553572 kubelet[2961]: W0122 00:32:46.553266 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.553572 kubelet[2961]: E0122 00:32:46.553285 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.558049 kubelet[2961]: E0122 00:32:46.557382 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.558049 kubelet[2961]: W0122 00:32:46.557499 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.558049 kubelet[2961]: E0122 00:32:46.557518 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.559940 kubelet[2961]: E0122 00:32:46.559706 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.559940 kubelet[2961]: W0122 00:32:46.559732 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.559940 kubelet[2961]: E0122 00:32:46.559747 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.565301 kubelet[2961]: E0122 00:32:46.562040 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.565301 kubelet[2961]: W0122 00:32:46.562062 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.565301 kubelet[2961]: E0122 00:32:46.562076 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.568056 kubelet[2961]: E0122 00:32:46.566529 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.568056 kubelet[2961]: W0122 00:32:46.566630 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.568056 kubelet[2961]: E0122 00:32:46.566650 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.568056 kubelet[2961]: E0122 00:32:46.567102 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.568056 kubelet[2961]: W0122 00:32:46.567114 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.568056 kubelet[2961]: E0122 00:32:46.567129 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.573584 kubelet[2961]: E0122 00:32:46.573373 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.573584 kubelet[2961]: W0122 00:32:46.573498 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.573584 kubelet[2961]: E0122 00:32:46.573519 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.574303 kubelet[2961]: E0122 00:32:46.574258 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.574303 kubelet[2961]: W0122 00:32:46.574271 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.574303 kubelet[2961]: E0122 00:32:46.574283 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.588547 kubelet[2961]: E0122 00:32:46.587754 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.588547 kubelet[2961]: W0122 00:32:46.587780 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.588547 kubelet[2961]: E0122 00:32:46.587954 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.591000 audit[3730]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:46.591000 audit[3730]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeacba1840 a2=0 a3=7ffeacba182c items=0 ppid=3099 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:46.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:46.604782 kubelet[2961]: E0122 00:32:46.603702 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.604782 kubelet[2961]: W0122 00:32:46.603734 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.604782 kubelet[2961]: E0122 00:32:46.603765 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.603000 audit[3730]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:32:46.603000 audit[3730]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffeacba1840 a2=0 a3=7ffeacba182c items=0 ppid=3099 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:46.603000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:32:46.611704 kubelet[2961]: E0122 00:32:46.610092 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.611704 kubelet[2961]: W0122 00:32:46.610114 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.611704 kubelet[2961]: E0122 00:32:46.610139 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:32:46.617178 kubelet[2961]: E0122 00:32:46.613628 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:32:46.619338 kubelet[2961]: W0122 00:32:46.617541 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:32:46.619338 kubelet[2961]: E0122 00:32:46.617654 2961 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:32:46.702000 audit: BPF prog-id=164 op=LOAD Jan 22 00:32:46.702000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3494 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:46.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643239653839303632636135306634396634316365393630386530 Jan 22 00:32:46.703000 audit: BPF prog-id=165 op=LOAD Jan 22 00:32:46.703000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3494 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:46.703000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643239653839303632636135306634396634316365393630386530 Jan 22 00:32:46.703000 audit: BPF prog-id=165 op=UNLOAD Jan 22 00:32:46.703000 audit[3687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:46.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643239653839303632636135306634396634316365393630386530 Jan 22 00:32:46.703000 audit: BPF prog-id=164 op=UNLOAD Jan 22 00:32:46.703000 audit[3687]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:32:46.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643239653839303632636135306634396634316365393630386530 Jan 22 00:32:46.703000 audit: BPF prog-id=166 op=LOAD Jan 22 00:32:46.703000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3494 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:32:46.703000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033643239653839303632636135306634396634316365393630386530 Jan 22 00:32:46.930369 containerd[1635]: time="2026-01-22T00:32:46.929685075Z" level=info msg="StartContainer for \"03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517\" returns successfully" Jan 22 00:32:46.973617 systemd[1]: cri-containerd-03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517.scope: Deactivated successfully. Jan 22 00:32:46.988984 kubelet[2961]: E0122 00:32:46.985759 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:46.991120 kubelet[2961]: E0122 00:32:46.990115 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:46.992000 audit: BPF prog-id=166 op=UNLOAD Jan 22 00:32:47.032717 containerd[1635]: time="2026-01-22T00:32:47.032203590Z" level=info msg="received container exit event container_id:\"03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517\" id:\"03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517\" pid:3700 exited_at:{seconds:1769041967 nanos:24220795}" Jan 22 00:32:47.361184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03d29e89062ca50f49f41ce9608e0a0fc33f7fc2545b663b257c6b152c19d517-rootfs.mount: Deactivated successfully. 
Jan 22 00:32:47.383993 kubelet[2961]: E0122 00:32:47.383655 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:47.386055 kubelet[2961]: E0122 00:32:47.385975 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:48.405742 kubelet[2961]: E0122 00:32:48.403078 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:48.407658 containerd[1635]: time="2026-01-22T00:32:48.405322407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 22 00:32:48.408336 kubelet[2961]: E0122 00:32:48.406082 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:49.074171 kubelet[2961]: E0122 00:32:49.073322 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:50.990535 kubelet[2961]: E0122 00:32:50.985319 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:52.016210 kubelet[2961]: E0122 00:32:52.014118 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:52.988465 kubelet[2961]: E0122 00:32:52.985041 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:54.993303 kubelet[2961]: E0122 00:32:54.987272 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:56.986032 kubelet[2961]: E0122 00:32:56.985972 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:32:57.987013 kubelet[2961]: E0122 00:32:57.985120 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:32:58.989528 kubelet[2961]: E0122 00:32:58.989468 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:01.002471 kubelet[2961]: E0122 00:33:01.001632 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:02.998788 kubelet[2961]: E0122 00:33:02.988675 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:04.025887 containerd[1635]: time="2026-01-22T00:33:04.025312918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:33:04.034617 containerd[1635]: time="2026-01-22T00:33:04.034497794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445948" Jan 22 00:33:04.038958 containerd[1635]: time="2026-01-22T00:33:04.038740941Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:33:04.049032 containerd[1635]: time="2026-01-22T00:33:04.048762918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:33:04.049629 containerd[1635]: time="2026-01-22T00:33:04.049488291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 
15.64386627s" Jan 22 00:33:04.049629 containerd[1635]: time="2026-01-22T00:33:04.049529820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 22 00:33:04.076183 containerd[1635]: time="2026-01-22T00:33:04.076044131Z" level=info msg="CreateContainer within sandbox \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 22 00:33:04.120952 containerd[1635]: time="2026-01-22T00:33:04.119711102Z" level=info msg="Container dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:33:04.161695 containerd[1635]: time="2026-01-22T00:33:04.161254972Z" level=info msg="CreateContainer within sandbox \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61\"" Jan 22 00:33:04.165750 containerd[1635]: time="2026-01-22T00:33:04.163961894Z" level=info msg="StartContainer for \"dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61\"" Jan 22 00:33:04.171701 containerd[1635]: time="2026-01-22T00:33:04.170613681Z" level=info msg="connecting to shim dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61" address="unix:///run/containerd/s/39356ce56dbcdb71eb22ee925c9517c9e05605054c892197b46c1cd6d35d935c" protocol=ttrpc version=3 Jan 22 00:33:04.283485 systemd[1]: Started cri-containerd-dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61.scope - libcontainer container dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61. 
Jan 22 00:33:04.468000 audit: BPF prog-id=167 op=LOAD Jan 22 00:33:04.478183 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 22 00:33:04.478459 kernel: audit: type=1334 audit(1769041984.468:564): prog-id=167 op=LOAD Jan 22 00:33:04.488163 kernel: audit: type=1300 audit(1769041984.468:564): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.468000 audit[3786]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.567167 kernel: audit: type=1327 audit(1769041984.468:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.567334 kernel: audit: type=1334 audit(1769041984.468:565): prog-id=168 op=LOAD Jan 22 00:33:04.468000 audit: BPF prog-id=168 op=LOAD Jan 22 00:33:04.468000 audit[3786]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.650085 kernel: audit: type=1300 audit(1769041984.468:565): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.656445 kernel: audit: type=1327 audit(1769041984.468:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.709331 kernel: audit: type=1334 audit(1769041984.468:566): prog-id=168 op=UNLOAD Jan 22 00:33:04.468000 audit: BPF prog-id=168 op=UNLOAD Jan 22 00:33:04.715585 kernel: audit: type=1300 audit(1769041984.468:566): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.468000 audit[3786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.468000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.800204 kernel: audit: type=1327 audit(1769041984.468:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.468000 audit: BPF prog-id=167 op=UNLOAD Jan 22 00:33:04.468000 audit[3786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.468000 audit: BPF prog-id=169 op=LOAD Jan 22 00:33:04.814038 kernel: audit: type=1334 audit(1769041984.468:567): prog-id=167 op=UNLOAD Jan 22 00:33:04.468000 audit[3786]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3494 pid=3786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:04.468000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663937643063306631323764353863373031666365393637373431 Jan 22 00:33:04.856264 containerd[1635]: time="2026-01-22T00:33:04.855699833Z" level=info msg="StartContainer for \"dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61\" returns successfully" Jan 22 00:33:04.984438 kubelet[2961]: E0122 00:33:04.984118 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:05.886000 kubelet[2961]: E0122 00:33:05.882783 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:06.897511 kubelet[2961]: E0122 00:33:06.897162 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:06.988316 kubelet[2961]: E0122 00:33:06.988108 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:08.571517 systemd[1]: cri-containerd-dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61.scope: Deactivated successfully. 
Jan 22 00:33:08.573577 systemd[1]: cri-containerd-dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61.scope: Consumed 2.362s CPU time, 179.1M memory peak, 3M read from disk, 171.3M written to disk. Jan 22 00:33:08.576000 audit: BPF prog-id=169 op=UNLOAD Jan 22 00:33:08.595780 containerd[1635]: time="2026-01-22T00:33:08.595452891Z" level=info msg="received container exit event container_id:\"dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61\" id:\"dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61\" pid:3800 exited_at:{seconds:1769041988 nanos:591791279}" Jan 22 00:33:08.635203 kubelet[2961]: I0122 00:33:08.634703 2961 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 22 00:33:08.796487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcf97d0c0f127d58c701fce967741e5c043e4f658b76d2106700d61fb98f9b61-rootfs.mount: Deactivated successfully. Jan 22 00:33:08.829569 systemd[1]: Created slice kubepods-besteffort-pod6839bde6_4689_4cd8_9f1c_2a5a8b19cdc2.slice - libcontainer container kubepods-besteffort-pod6839bde6_4689_4cd8_9f1c_2a5a8b19cdc2.slice. 
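The slice names systemd creates here follow directly from each pod's QoS class and UID. A sketch of the naming pattern as it appears in this log (`kubepods_slice` is a hypothetical helper that mirrors the visible pattern, not kubelet's actual code):

```python
def kubepods_slice(pod_uid: str, qos: str = "besteffort") -> str:
    """Derive the systemd slice name seen in the log: the pod's QoS
    class plus its UID with dashes replaced by underscores (systemd
    reserves '-' as a hierarchy separator in unit names)."""
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

# UID of the goldmane-7c778bb748-956jd pod from the log above:
print(kubepods_slice("6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2"))
# kubepods-besteffort-pod6839bde6_4689_4cd8_9f1c_2a5a8b19cdc2.slice
```

The same rule reproduces the burstable slice created for the coredns pod (`kubepods-burstable-pod2cb94888_…`) a few entries later.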
Jan 22 00:33:08.841659 kubelet[2961]: I0122 00:33:08.841507 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2-config\") pod \"goldmane-7c778bb748-956jd\" (UID: \"6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2\") " pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:08.841659 kubelet[2961]: I0122 00:33:08.841633 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-956jd\" (UID: \"6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2\") " pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:08.850198 kubelet[2961]: I0122 00:33:08.841668 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2-goldmane-key-pair\") pod \"goldmane-7c778bb748-956jd\" (UID: \"6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2\") " pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:08.850198 kubelet[2961]: I0122 00:33:08.841698 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sg2\" (UniqueName: \"kubernetes.io/projected/6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2-kube-api-access-f4sg2\") pod \"goldmane-7c778bb748-956jd\" (UID: \"6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2\") " pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:08.974503 systemd[1]: Created slice kubepods-burstable-pod2cb94888_9f16_48f2_8fc7_64c6889ef0fc.slice - libcontainer container kubepods-burstable-pod2cb94888_9f16_48f2_8fc7_64c6889ef0fc.slice. 
Jan 22 00:33:09.079996 systemd[1]: Created slice kubepods-besteffort-poda8109f84_107e_4926_bb88_cd99083f8125.slice - libcontainer container kubepods-besteffort-poda8109f84_107e_4926_bb88_cd99083f8125.slice. Jan 22 00:33:09.085661 kubelet[2961]: I0122 00:33:09.085424 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8109f84-107e-4926-bb88-cd99083f8125-calico-apiserver-certs\") pod \"calico-apiserver-567c775cb4-2tqd7\" (UID: \"a8109f84-107e-4926-bb88-cd99083f8125\") " pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:09.086209 kubelet[2961]: I0122 00:33:09.085675 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdws\" (UniqueName: \"kubernetes.io/projected/cc699319-6548-46e3-b846-fb40b8bdda3a-kube-api-access-skdws\") pod \"calico-apiserver-79ff4d8844-xf4gm\" (UID: \"cc699319-6548-46e3-b846-fb40b8bdda3a\") " pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:09.086209 kubelet[2961]: I0122 00:33:09.085733 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb94888-9f16-48f2-8fc7-64c6889ef0fc-config-volume\") pod \"coredns-66bc5c9577-zqk9r\" (UID: \"2cb94888-9f16-48f2-8fc7-64c6889ef0fc\") " pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:09.087488 kubelet[2961]: E0122 00:33:09.086672 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:09.090745 kubelet[2961]: I0122 00:33:09.089294 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7xg\" (UniqueName: \"kubernetes.io/projected/bbc09f02-5803-4a1d-8b41-1e543cceb488-kube-api-access-sn7xg\") 
pod \"whisker-5df8f6b8cf-s2hhm\" (UID: \"bbc09f02-5803-4a1d-8b41-1e543cceb488\") " pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:09.090745 kubelet[2961]: I0122 00:33:09.089552 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-ca-bundle\") pod \"whisker-5df8f6b8cf-s2hhm\" (UID: \"bbc09f02-5803-4a1d-8b41-1e543cceb488\") " pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:09.091715 kubelet[2961]: I0122 00:33:09.091602 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dfs\" (UniqueName: \"kubernetes.io/projected/a8109f84-107e-4926-bb88-cd99083f8125-kube-api-access-79dfs\") pod \"calico-apiserver-567c775cb4-2tqd7\" (UID: \"a8109f84-107e-4926-bb88-cd99083f8125\") " pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:09.092921 kubelet[2961]: I0122 00:33:09.091750 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc699319-6548-46e3-b846-fb40b8bdda3a-calico-apiserver-certs\") pod \"calico-apiserver-79ff4d8844-xf4gm\" (UID: \"cc699319-6548-46e3-b846-fb40b8bdda3a\") " pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:09.092921 kubelet[2961]: I0122 00:33:09.092141 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmqm\" (UniqueName: \"kubernetes.io/projected/2cb94888-9f16-48f2-8fc7-64c6889ef0fc-kube-api-access-hdmqm\") pod \"coredns-66bc5c9577-zqk9r\" (UID: \"2cb94888-9f16-48f2-8fc7-64c6889ef0fc\") " pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:09.092921 kubelet[2961]: I0122 00:33:09.092177 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-backend-key-pair\") pod \"whisker-5df8f6b8cf-s2hhm\" (UID: \"bbc09f02-5803-4a1d-8b41-1e543cceb488\") " pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:09.111096 containerd[1635]: time="2026-01-22T00:33:09.111046342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 22 00:33:09.114997 systemd[1]: Created slice kubepods-besteffort-podcc699319_6548_46e3_b846_fb40b8bdda3a.slice - libcontainer container kubepods-besteffort-podcc699319_6548_46e3_b846_fb40b8bdda3a.slice. Jan 22 00:33:09.150002 systemd[1]: Created slice kubepods-besteffort-podbbc09f02_5803_4a1d_8b41_1e543cceb488.slice - libcontainer container kubepods-besteffort-podbbc09f02_5803_4a1d_8b41_1e543cceb488.slice. Jan 22 00:33:09.153636 containerd[1635]: time="2026-01-22T00:33:09.153054279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:09.209509 kubelet[2961]: I0122 00:33:09.209235 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1d20bb6-82c3-4af1-9823-e27799a9a91a-calico-apiserver-certs\") pod \"calico-apiserver-79ff4d8844-vkmvl\" (UID: \"e1d20bb6-82c3-4af1-9823-e27799a9a91a\") " pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:09.210542 kubelet[2961]: I0122 00:33:09.210191 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqjz\" (UniqueName: \"kubernetes.io/projected/e1d20bb6-82c3-4af1-9823-e27799a9a91a-kube-api-access-7hqjz\") pod \"calico-apiserver-79ff4d8844-vkmvl\" (UID: \"e1d20bb6-82c3-4af1-9823-e27799a9a91a\") " pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:09.210542 kubelet[2961]: I0122 00:33:09.210306 2961 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cddc50d2-bbfe-4bdb-8697-ec1251db07b4-tigera-ca-bundle\") pod \"calico-kube-controllers-6fd67fc48-sbqdz\" (UID: \"cddc50d2-bbfe-4bdb-8697-ec1251db07b4\") " pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:09.210542 kubelet[2961]: I0122 00:33:09.210337 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnp94\" (UniqueName: \"kubernetes.io/projected/cddc50d2-bbfe-4bdb-8697-ec1251db07b4-kube-api-access-bnp94\") pod \"calico-kube-controllers-6fd67fc48-sbqdz\" (UID: \"cddc50d2-bbfe-4bdb-8697-ec1251db07b4\") " pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:09.210689 kubelet[2961]: I0122 00:33:09.210567 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc2a2d5-dab7-482f-b368-90c3db40ee93-config-volume\") pod \"coredns-66bc5c9577-zlbdn\" (UID: \"1fc2a2d5-dab7-482f-b368-90c3db40ee93\") " pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:09.210689 kubelet[2961]: I0122 00:33:09.210653 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghf2g\" (UniqueName: \"kubernetes.io/projected/1fc2a2d5-dab7-482f-b368-90c3db40ee93-kube-api-access-ghf2g\") pod \"coredns-66bc5c9577-zlbdn\" (UID: \"1fc2a2d5-dab7-482f-b368-90c3db40ee93\") " pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:09.219733 systemd[1]: Created slice kubepods-besteffort-pode1d20bb6_82c3_4af1_9823_e27799a9a91a.slice - libcontainer container kubepods-besteffort-pode1d20bb6_82c3_4af1_9823_e27799a9a91a.slice. 
Jan 22 00:33:09.237720 systemd[1]: Created slice kubepods-besteffort-podcddc50d2_bbfe_4bdb_8697_ec1251db07b4.slice - libcontainer container kubepods-besteffort-podcddc50d2_bbfe_4bdb_8697_ec1251db07b4.slice. Jan 22 00:33:09.338003 systemd[1]: Created slice kubepods-besteffort-podd3f33826_c9a7_4e28_a985_814cedd1e52b.slice - libcontainer container kubepods-besteffort-podd3f33826_c9a7_4e28_a985_814cedd1e52b.slice. Jan 22 00:33:09.399309 kubelet[2961]: E0122 00:33:09.399196 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:09.409928 systemd[1]: Created slice kubepods-burstable-pod1fc2a2d5_dab7_482f_b368_90c3db40ee93.slice - libcontainer container kubepods-burstable-pod1fc2a2d5_dab7_482f_b368_90c3db40ee93.slice. Jan 22 00:33:09.416245 containerd[1635]: time="2026-01-22T00:33:09.415662955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,}" Jan 22 00:33:09.440254 containerd[1635]: time="2026-01-22T00:33:09.438324337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:09.549588 containerd[1635]: time="2026-01-22T00:33:09.549313395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:09.641547 containerd[1635]: time="2026-01-22T00:33:09.641093932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:09.676203 containerd[1635]: time="2026-01-22T00:33:09.641333476Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:09.676203 containerd[1635]: time="2026-01-22T00:33:09.651082842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:09.749329 containerd[1635]: time="2026-01-22T00:33:09.749269815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:09.955223 kubelet[2961]: E0122 00:33:09.950563 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:10.011658 containerd[1635]: time="2026-01-22T00:33:10.011137666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,}" Jan 22 00:33:11.112020 containerd[1635]: time="2026-01-22T00:33:11.111181267Z" level=error msg="Failed to destroy network for sandbox \"c223dfc659db0036b0b2611a223a6f8e4117affc2b07251d9db41b71292ff441\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.130455 systemd[1]: run-netns-cni\x2d5d9ced73\x2d8fd1\x2dad27\x2d17f7\x2dadb771b28158.mount: Deactivated successfully. 
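The `run-netns-cni\x2d…` mount units above use systemd's unit-name escaping: `-` separates path components and `\x2d` stands for a literal dash. A minimal sketch of reversing it for names like these (`unit_to_path` is a hypothetical helper covering only the pattern seen here; the real tool is `systemd-escape -u`):

```python
import re

def unit_to_path(unit: str) -> str:
    """Undo systemd unit-name escaping for a .mount unit: '-' becomes
    '/', then '\\xNN' sequences decode back to the escaped bytes."""
    stem = unit.removesuffix(".mount")
    path = "/" + stem.replace("-", "/")
    return re.sub(r"\\x([0-9a-f]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

# Mount unit name copied from the systemd entry above:
print(unit_to_path(r"run-netns-cni\x2d5d9ced73\x2d8fd1\x2dad27\x2d17f7\x2dadb771b28158.mount"))
# /run/netns/cni-5d9ced73-8fd1-ad27-17f7-adb771b28158
```

So each of these units is the network namespace bind-mount for one CNI sandbox being torn down.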
Jan 22 00:33:11.185509 containerd[1635]: time="2026-01-22T00:33:11.185111502Z" level=error msg="Failed to destroy network for sandbox \"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.210999 systemd[1]: run-netns-cni\x2d7f6ab45a\x2df161\x2dd6d3\x2d59da\x2d0f6fa2aa0f53.mount: Deactivated successfully. Jan 22 00:33:11.225536 containerd[1635]: time="2026-01-22T00:33:11.222663612Z" level=error msg="Failed to destroy network for sandbox \"b30739d4ad894cbfc6cca1a30b979f644ab9fbb8441d3c9afca0947d65c5a02f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.234929 systemd[1]: run-netns-cni\x2d1e8b64e2\x2d2b51\x2d00a7\x2d4a4a\x2dbf77d76bc38c.mount: Deactivated successfully. Jan 22 00:33:11.385554 containerd[1635]: time="2026-01-22T00:33:11.375565625Z" level=error msg="Failed to destroy network for sandbox \"33c8d75d4dc1744ea98c0217982171ffe23e98eab99d016b1f6817160f04e662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.383282 systemd[1]: run-netns-cni\x2dda5a1fd4\x2d850b\x2d2162\x2ddbd9\x2df78b4564316d.mount: Deactivated successfully. 
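Every sandbox failure in this stretch repeats one root cause: `/var/lib/calico/nodename` does not exist yet, because it is written only once the calico/node container (being pulled above) runs. A small triage sketch, not taken from any real tool, that pulls the sandbox ID, CNI operation, and missing file out of one such error string:

```python
import re

# One kubelet-side error string, repeated throughout this log:
err = ('rpc error: code = Unknown desc = failed to setup network for sandbox '
       '"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54": '
       'plugin type="calico" failed (add): stat /var/lib/calico/nodename: '
       'no such file or directory: check that the calico/node container is '
       'running and has mounted /var/lib/calico/')

def parse_cni_failure(msg: str):
    """Extract (sandbox_id, cni_op, missing_path) from a calico CNI error."""
    sandbox = re.search(r'sandbox "([0-9a-f]{64})"', msg)
    op = re.search(r'failed \((add|delete)\)', msg)
    missing = re.search(r'stat (\S+?): no such file', msg)
    return (sandbox and sandbox.group(1),
            op and op.group(1),
            missing and missing.group(1))

print(parse_cni_failure(err))
```

Grouping the errors this way makes it obvious that the seven distinct sandboxes below all fail on the same missing file, i.e. one node-level problem, not seven pod-level ones.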
Jan 22 00:33:11.388639 containerd[1635]: time="2026-01-22T00:33:11.387449542Z" level=error msg="Failed to destroy network for sandbox \"a02531e564d7b44e4b496b199e56e11d842ae41eed398787aeace81206cd9c7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.440626 containerd[1635]: time="2026-01-22T00:33:11.429517280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c8d75d4dc1744ea98c0217982171ffe23e98eab99d016b1f6817160f04e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.443120 containerd[1635]: time="2026-01-22T00:33:11.432070688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.443120 containerd[1635]: time="2026-01-22T00:33:11.432083512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30739d4ad894cbfc6cca1a30b979f644ab9fbb8441d3c9afca0947d65c5a02f\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.443120 containerd[1635]: time="2026-01-22T00:33:11.435602323Z" level=error msg="Failed to destroy network for sandbox \"f76c675159e3de017f309fc70b60002c3e97b84b2a6a6d9a825a0b85ae4e886d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.443120 containerd[1635]: time="2026-01-22T00:33:11.439526140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02531e564d7b44e4b496b199e56e11d842ae41eed398787aeace81206cd9c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.446681 containerd[1635]: time="2026-01-22T00:33:11.432054698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c223dfc659db0036b0b2611a223a6f8e4117affc2b07251d9db41b71292ff441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.541792 kubelet[2961]: E0122 00:33:11.536255 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c223dfc659db0036b0b2611a223a6f8e4117affc2b07251d9db41b71292ff441\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.541792 kubelet[2961]: E0122 00:33:11.538438 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.541792 kubelet[2961]: E0122 00:33:11.543044 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:11.541792 kubelet[2961]: E0122 00:33:11.543234 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:11.559525 kubelet[2961]: E0122 00:33:11.544690 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"29140d6bd7ec80bfcaa5d2a11835adfb1a250cba548353c3a0dbcedbd9085b54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zqk9r" podUID="2cb94888-9f16-48f2-8fc7-64c6889ef0fc" Jan 22 00:33:11.559525 kubelet[2961]: E0122 00:33:11.545167 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30739d4ad894cbfc6cca1a30b979f644ab9fbb8441d3c9afca0947d65c5a02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.559525 kubelet[2961]: E0122 00:33:11.545197 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30739d4ad894cbfc6cca1a30b979f644ab9fbb8441d3c9afca0947d65c5a02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:11.560333 containerd[1635]: time="2026-01-22T00:33:11.553989692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76c675159e3de017f309fc70b60002c3e97b84b2a6a6d9a825a0b85ae4e886d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.560975 kubelet[2961]: E0122 00:33:11.545218 2961 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30739d4ad894cbfc6cca1a30b979f644ab9fbb8441d3c9afca0947d65c5a02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:11.560975 kubelet[2961]: E0122 00:33:11.545423 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b30739d4ad894cbfc6cca1a30b979f644ab9fbb8441d3c9afca0947d65c5a02f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4" Jan 22 00:33:11.560975 kubelet[2961]: E0122 00:33:11.545561 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02531e564d7b44e4b496b199e56e11d842ae41eed398787aeace81206cd9c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.561452 kubelet[2961]: E0122 00:33:11.545670 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02531e564d7b44e4b496b199e56e11d842ae41eed398787aeace81206cd9c7d\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:11.561452 kubelet[2961]: E0122 00:33:11.545695 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02531e564d7b44e4b496b199e56e11d842ae41eed398787aeace81206cd9c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:11.561452 kubelet[2961]: E0122 00:33:11.545731 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a02531e564d7b44e4b496b199e56e11d842ae41eed398787aeace81206cd9c7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:33:11.562126 kubelet[2961]: E0122 00:33:11.545757 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c223dfc659db0036b0b2611a223a6f8e4117affc2b07251d9db41b71292ff441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:11.562126 kubelet[2961]: E0122 00:33:11.545776 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c223dfc659db0036b0b2611a223a6f8e4117affc2b07251d9db41b71292ff441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:11.562126 kubelet[2961]: E0122 00:33:11.546240 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c223dfc659db0036b0b2611a223a6f8e4117affc2b07251d9db41b71292ff441\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:33:11.562483 kubelet[2961]: E0122 00:33:11.541785 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c8d75d4dc1744ea98c0217982171ffe23e98eab99d016b1f6817160f04e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.562483 kubelet[2961]: E0122 00:33:11.549475 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"33c8d75d4dc1744ea98c0217982171ffe23e98eab99d016b1f6817160f04e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:11.562483 kubelet[2961]: E0122 00:33:11.549505 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c8d75d4dc1744ea98c0217982171ffe23e98eab99d016b1f6817160f04e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:11.562631 kubelet[2961]: E0122 00:33:11.553179 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33c8d75d4dc1744ea98c0217982171ffe23e98eab99d016b1f6817160f04e662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df8f6b8cf-s2hhm" podUID="bbc09f02-5803-4a1d-8b41-1e543cceb488" Jan 22 00:33:11.562631 kubelet[2961]: E0122 00:33:11.558167 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76c675159e3de017f309fc70b60002c3e97b84b2a6a6d9a825a0b85ae4e886d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.562631 kubelet[2961]: E0122 00:33:11.558213 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76c675159e3de017f309fc70b60002c3e97b84b2a6a6d9a825a0b85ae4e886d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:11.564070 kubelet[2961]: E0122 00:33:11.558239 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76c675159e3de017f309fc70b60002c3e97b84b2a6a6d9a825a0b85ae4e886d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:11.564070 kubelet[2961]: E0122 00:33:11.558303 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f76c675159e3de017f309fc70b60002c3e97b84b2a6a6d9a825a0b85ae4e886d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlbdn" podUID="1fc2a2d5-dab7-482f-b368-90c3db40ee93" Jan 22 00:33:11.564305 containerd[1635]: time="2026-01-22T00:33:11.563226381Z" level=error msg="Failed to destroy network for sandbox 
\"2d49dec1c20e069d2357154e7c47b829ed6d8dbca8020cb7c0bb75ba44e02dfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.606092 containerd[1635]: time="2026-01-22T00:33:11.602994851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d49dec1c20e069d2357154e7c47b829ed6d8dbca8020cb7c0bb75ba44e02dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.606503 kubelet[2961]: E0122 00:33:11.604020 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d49dec1c20e069d2357154e7c47b829ed6d8dbca8020cb7c0bb75ba44e02dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.613429 kubelet[2961]: E0122 00:33:11.604174 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d49dec1c20e069d2357154e7c47b829ed6d8dbca8020cb7c0bb75ba44e02dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:11.613429 kubelet[2961]: E0122 00:33:11.613148 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d49dec1c20e069d2357154e7c47b829ed6d8dbca8020cb7c0bb75ba44e02dfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:11.619904 kubelet[2961]: E0122 00:33:11.614179 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d49dec1c20e069d2357154e7c47b829ed6d8dbca8020cb7c0bb75ba44e02dfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a" Jan 22 00:33:11.641586 containerd[1635]: time="2026-01-22T00:33:11.639704549Z" level=error msg="Failed to destroy network for sandbox \"f71d8d20f9781297ac6b8ba64ad259df7436721f5f578efdb396256f702e1141\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.661483 containerd[1635]: time="2026-01-22T00:33:11.661230478Z" level=error msg="Failed to destroy network for sandbox \"a5b774fe813ba2a37f3e7c4f314653292c9bba3fb9e1d7e896e34ee9ddda265f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.683727 containerd[1635]: 
time="2026-01-22T00:33:11.681294706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71d8d20f9781297ac6b8ba64ad259df7436721f5f578efdb396256f702e1141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.691232 kubelet[2961]: E0122 00:33:11.690595 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71d8d20f9781297ac6b8ba64ad259df7436721f5f578efdb396256f702e1141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.699666 kubelet[2961]: E0122 00:33:11.699038 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71d8d20f9781297ac6b8ba64ad259df7436721f5f578efdb396256f702e1141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:33:11.699666 kubelet[2961]: E0122 00:33:11.699093 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71d8d20f9781297ac6b8ba64ad259df7436721f5f578efdb396256f702e1141\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:33:11.699666 kubelet[2961]: 
E0122 00:33:11.699175 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f71d8d20f9781297ac6b8ba64ad259df7436721f5f578efdb396256f702e1141\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:11.702644 containerd[1635]: time="2026-01-22T00:33:11.701166516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b774fe813ba2a37f3e7c4f314653292c9bba3fb9e1d7e896e34ee9ddda265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.703191 kubelet[2961]: E0122 00:33:11.702643 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b774fe813ba2a37f3e7c4f314653292c9bba3fb9e1d7e896e34ee9ddda265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:11.703191 kubelet[2961]: E0122 00:33:11.702747 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a5b774fe813ba2a37f3e7c4f314653292c9bba3fb9e1d7e896e34ee9ddda265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:11.704688 kubelet[2961]: E0122 00:33:11.702779 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b774fe813ba2a37f3e7c4f314653292c9bba3fb9e1d7e896e34ee9ddda265f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:11.707516 kubelet[2961]: E0122 00:33:11.707475 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b774fe813ba2a37f3e7c4f314653292c9bba3fb9e1d7e896e34ee9ddda265f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:33:11.842533 systemd[1]: run-netns-cni\x2de33c2a82\x2dbbbd\x2deb6c\x2d9794\x2d218751a0c76d.mount: Deactivated successfully. Jan 22 00:33:11.842779 systemd[1]: run-netns-cni\x2d1893d8e2\x2d9a3c\x2dff31\x2d98da\x2d64dcdd653cfd.mount: Deactivated successfully. 
Jan 22 00:33:11.843039 systemd[1]: run-netns-cni\x2df2ec854e\x2d958c\x2d7308\x2d5943\x2d4b101681b7cc.mount: Deactivated successfully. Jan 22 00:33:11.843135 systemd[1]: run-netns-cni\x2da9acbc8d\x2d64ff\x2d2605\x2d57d9\x2dda559abf5b04.mount: Deactivated successfully. Jan 22 00:33:11.843236 systemd[1]: run-netns-cni\x2d89e9a47a\x2d65f1\x2d79e5\x2d889e\x2d2e0f3958c4f1.mount: Deactivated successfully. Jan 22 00:33:18.897181 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:34788.service - OpenSSH per-connection server daemon (10.0.0.1:34788). Jan 22 00:33:18.986785 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 22 00:33:18.990517 kernel: audit: type=1130 audit(1769041998.893:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.25:22-10.0.0.1:34788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:18.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.25:22-10.0.0.1:34788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:33:19.695300 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 34788 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:33:19.689000 audit[4141]: USER_ACCT pid=4141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:19.723543 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:33:19.755203 kernel: audit: type=1101 audit(1769041999.689:571): pid=4141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:19.700000 audit[4141]: CRED_ACQ pid=4141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:19.831026 kernel: audit: type=1103 audit(1769041999.700:572): pid=4141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:19.831186 kernel: audit: type=1006 audit(1769041999.700:573): pid=4141 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 22 00:33:19.700000 audit[4141]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff45076330 a2=3 a3=0 items=0 ppid=1 pid=4141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:19.861566 systemd-logind[1609]: New session 10 of user core. Jan 22 00:33:19.700000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:19.927317 kernel: audit: type=1300 audit(1769041999.700:573): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff45076330 a2=3 a3=0 items=0 ppid=1 pid=4141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:19.927545 kernel: audit: type=1327 audit(1769041999.700:573): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:19.930539 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 22 00:33:19.967000 audit[4141]: USER_START pid=4141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:20.009987 kernel: audit: type=1105 audit(1769041999.967:574): pid=4141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:19.989000 audit[4144]: CRED_ACQ pid=4144 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:20.078718 kernel: audit: type=1103 audit(1769041999.989:575): pid=4144 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:20.725557 sshd[4144]: Connection closed by 10.0.0.1 port 34788 Jan 22 00:33:20.727219 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jan 22 00:33:20.739000 audit[4141]: USER_END pid=4141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:20.763109 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit. Jan 22 00:33:20.767760 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:34788.service: Deactivated successfully. Jan 22 00:33:20.774777 systemd[1]: session-10.scope: Deactivated successfully. Jan 22 00:33:20.797960 kernel: audit: type=1106 audit(1769042000.739:576): pid=4141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:20.796131 systemd-logind[1609]: Removed session 10. Jan 22 00:33:20.747000 audit[4141]: CRED_DISP pid=4141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:20.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.25:22-10.0.0.1:34788 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:33:20.841122 kernel: audit: type=1104 audit(1769042000.747:577): pid=4141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:22.006790 containerd[1635]: time="2026-01-22T00:33:22.006296509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:22.469564 containerd[1635]: time="2026-01-22T00:33:22.468625532Z" level=error msg="Failed to destroy network for sandbox \"edfc8e557767c1814a3dda0bb10544ac45efa0141bd15f389d7c0d302cdc8dde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:22.481625 systemd[1]: run-netns-cni\x2d792dee66\x2db3ac\x2d2fca\x2df492\x2d68a040b0180b.mount: Deactivated successfully. 
Jan 22 00:33:22.538713 containerd[1635]: time="2026-01-22T00:33:22.534624161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfc8e557767c1814a3dda0bb10544ac45efa0141bd15f389d7c0d302cdc8dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:22.560074 kubelet[2961]: E0122 00:33:22.559567 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfc8e557767c1814a3dda0bb10544ac45efa0141bd15f389d7c0d302cdc8dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:22.567153 kubelet[2961]: E0122 00:33:22.566202 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfc8e557767c1814a3dda0bb10544ac45efa0141bd15f389d7c0d302cdc8dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:22.567153 kubelet[2961]: E0122 00:33:22.566507 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfc8e557767c1814a3dda0bb10544ac45efa0141bd15f389d7c0d302cdc8dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:22.567153 kubelet[2961]: E0122 00:33:22.566590 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edfc8e557767c1814a3dda0bb10544ac45efa0141bd15f389d7c0d302cdc8dde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a" Jan 22 00:33:23.008604 kubelet[2961]: E0122 00:33:23.005475 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:23.009090 containerd[1635]: time="2026-01-22T00:33:23.006299581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,}" Jan 22 00:33:23.020909 kubelet[2961]: E0122 00:33:23.018258 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:23.075635 containerd[1635]: time="2026-01-22T00:33:23.075138482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,}" Jan 22 00:33:23.966190 containerd[1635]: time="2026-01-22T00:33:23.964585022Z" level=error msg="Failed to destroy network for sandbox 
\"150d01c3b9bd215072c605c67a61793f94c6e1c32d0c8edf1e7a7ba7063d10b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:23.977551 systemd[1]: run-netns-cni\x2da9ef76be\x2dd43d\x2dccf4\x2d8e75\x2dccb0d6bbd557.mount: Deactivated successfully. Jan 22 00:33:23.995486 containerd[1635]: time="2026-01-22T00:33:23.979046940Z" level=error msg="Failed to destroy network for sandbox \"781438fdf27b081f5668e622411ad6b2154c70d4d8e12fee9644d8e9e6beaeb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:23.984785 systemd[1]: run-netns-cni\x2d3d841810\x2d0aa1\x2d446a\x2d7e73\x2d61d0f22d1917.mount: Deactivated successfully. Jan 22 00:33:24.020277 containerd[1635]: time="2026-01-22T00:33:24.019967357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"781438fdf27b081f5668e622411ad6b2154c70d4d8e12fee9644d8e9e6beaeb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:24.032120 kubelet[2961]: E0122 00:33:24.031707 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781438fdf27b081f5668e622411ad6b2154c70d4d8e12fee9644d8e9e6beaeb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:24.033037 kubelet[2961]: E0122 00:33:24.032150 2961 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781438fdf27b081f5668e622411ad6b2154c70d4d8e12fee9644d8e9e6beaeb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:24.033037 kubelet[2961]: E0122 00:33:24.032184 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781438fdf27b081f5668e622411ad6b2154c70d4d8e12fee9644d8e9e6beaeb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:24.033037 kubelet[2961]: E0122 00:33:24.032256 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"781438fdf27b081f5668e622411ad6b2154c70d4d8e12fee9644d8e9e6beaeb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zqk9r" podUID="2cb94888-9f16-48f2-8fc7-64c6889ef0fc" Jan 22 00:33:24.036699 containerd[1635]: time="2026-01-22T00:33:24.034091263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"150d01c3b9bd215072c605c67a61793f94c6e1c32d0c8edf1e7a7ba7063d10b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:24.037187 kubelet[2961]: E0122 00:33:24.036977 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150d01c3b9bd215072c605c67a61793f94c6e1c32d0c8edf1e7a7ba7063d10b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:24.037187 kubelet[2961]: E0122 00:33:24.037026 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150d01c3b9bd215072c605c67a61793f94c6e1c32d0c8edf1e7a7ba7063d10b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:24.037187 kubelet[2961]: E0122 00:33:24.037051 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150d01c3b9bd215072c605c67a61793f94c6e1c32d0c8edf1e7a7ba7063d10b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:24.037324 kubelet[2961]: E0122 00:33:24.037115 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"150d01c3b9bd215072c605c67a61793f94c6e1c32d0c8edf1e7a7ba7063d10b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlbdn" podUID="1fc2a2d5-dab7-482f-b368-90c3db40ee93" Jan 22 00:33:25.269124 containerd[1635]: time="2026-01-22T00:33:25.268227630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:25.305570 containerd[1635]: time="2026-01-22T00:33:25.298706125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:25.933615 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:33:25.934459 kernel: audit: type=1130 audit(1769042005.895:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.25:22-10.0.0.1:35640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:25.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.25:22-10.0.0.1:35640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:25.896330 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:35640.service - OpenSSH per-connection server daemon (10.0.0.1:35640). 
Jan 22 00:33:26.095952 containerd[1635]: time="2026-01-22T00:33:26.069766745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,}"
Jan 22 00:33:26.131622 containerd[1635]: time="2026-01-22T00:33:26.131257623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,}"
Jan 22 00:33:26.210019 containerd[1635]: time="2026-01-22T00:33:26.167465670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,}"
Jan 22 00:33:26.571679 containerd[1635]: time="2026-01-22T00:33:26.566324098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,}"
Jan 22 00:33:26.764137 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 35640 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:33:26.762000 audit[4321]: USER_ACCT pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:26.776745 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:33:26.835751 kernel: audit: type=1101 audit(1769042006.762:580): pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:26.774000 audit[4321]: CRED_ACQ pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:26.923636 kernel: audit: type=1103 audit(1769042006.774:581): pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:26.869064 systemd-logind[1609]: New session 11 of user core.
Jan 22 00:33:26.987733 kernel: audit: type=1006 audit(1769042006.775:582): pid=4321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Jan 22 00:33:26.982571 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 22 00:33:26.775000 audit[4321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd37ef0080 a2=3 a3=0 items=0 ppid=1 pid=4321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:33:27.043293 kernel: audit: type=1300 audit(1769042006.775:582): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd37ef0080 a2=3 a3=0 items=0 ppid=1 pid=4321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:33:27.059192 kernel: audit: type=1327 audit(1769042006.775:582): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:33:26.775000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:33:27.100000 audit[4321]: USER_START pid=4321 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:27.141068 kernel: audit: type=1105 audit(1769042007.100:583): pid=4321 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:27.161694 containerd[1635]: time="2026-01-22T00:33:27.158739518Z" level=error msg="Failed to destroy network for sandbox \"c54ff3d84449d2b29cbfb1605ed90c64b2774c7599a03338349d37d7e9f5a83d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:27.180000 audit[4383]: CRED_ACQ pid=4383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:27.213505 containerd[1635]: time="2026-01-22T00:33:27.184286535Z" level=error msg="Failed to destroy network for sandbox \"162817a07ca75389299cf86d6140a28cbf3008eeeb8104f79bc14e325bc08fb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:27.268263 kernel: audit: type=1103 audit(1769042007.180:584): pid=4383 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:27.270686 systemd[1]: run-netns-cni\x2dd3f50857\x2d3d99\x2dd876\x2d9d8d\x2d0c149dff77af.mount: Deactivated successfully.
Jan 22 00:33:27.590089 systemd[1]: run-netns-cni\x2d6a2ea4e2\x2d316d\x2d4cb7\x2d52ff\x2d83409baa159c.mount: Deactivated successfully.
Jan 22 00:33:27.600544 containerd[1635]: time="2026-01-22T00:33:27.599093994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54ff3d84449d2b29cbfb1605ed90c64b2774c7599a03338349d37d7e9f5a83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:27.653767 kubelet[2961]: E0122 00:33:27.618794 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54ff3d84449d2b29cbfb1605ed90c64b2774c7599a03338349d37d7e9f5a83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:27.653767 kubelet[2961]: E0122 00:33:27.619209 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54ff3d84449d2b29cbfb1605ed90c64b2774c7599a03338349d37d7e9f5a83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm"
Jan 22 00:33:27.653767 kubelet[2961]: E0122 00:33:27.619242 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c54ff3d84449d2b29cbfb1605ed90c64b2774c7599a03338349d37d7e9f5a83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm"
Jan 22 00:33:27.655025 kubelet[2961]: E0122 00:33:27.619309 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c54ff3d84449d2b29cbfb1605ed90c64b2774c7599a03338349d37d7e9f5a83d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df8f6b8cf-s2hhm" podUID="bbc09f02-5803-4a1d-8b41-1e543cceb488"
Jan 22 00:33:27.800971 containerd[1635]: time="2026-01-22T00:33:27.746780069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"162817a07ca75389299cf86d6140a28cbf3008eeeb8104f79bc14e325bc08fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:27.814104 kubelet[2961]: E0122 00:33:27.812650 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162817a07ca75389299cf86d6140a28cbf3008eeeb8104f79bc14e325bc08fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:27.814558 kubelet[2961]: E0122 00:33:27.814516 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162817a07ca75389299cf86d6140a28cbf3008eeeb8104f79bc14e325bc08fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz"
Jan 22 00:33:27.814978 kubelet[2961]: E0122 00:33:27.814944 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"162817a07ca75389299cf86d6140a28cbf3008eeeb8104f79bc14e325bc08fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz"
Jan 22 00:33:27.815171 kubelet[2961]: E0122 00:33:27.815131 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"162817a07ca75389299cf86d6140a28cbf3008eeeb8104f79bc14e325bc08fb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4"
Jan 22 00:33:28.327999 sshd[4383]: Connection closed by 10.0.0.1 port 35640
Jan 22 00:33:28.330556 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Jan 22 00:33:28.378128 kernel: audit: type=1106 audit(1769042008.339:585): pid=4321 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:28.339000 audit[4321]: USER_END pid=4321 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:28.393265 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:35640.service: Deactivated successfully.
Jan 22 00:33:28.403312 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 00:33:28.411295 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit.
Jan 22 00:33:28.339000 audit[4321]: CRED_DISP pid=4321 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:28.460464 kernel: audit: type=1104 audit(1769042008.339:586): pid=4321 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:28.432074 systemd-logind[1609]: Removed session 11.
Jan 22 00:33:28.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.25:22-10.0.0.1:35640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:33:29.005715 containerd[1635]: time="2026-01-22T00:33:29.005569682Z" level=error msg="Failed to destroy network for sandbox \"568d9540da8c5e192ea774d7c8a5add2ab87775f53da9a558b96cff9f97b4ded\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.010755 containerd[1635]: time="2026-01-22T00:33:29.007905680Z" level=error msg="Failed to destroy network for sandbox \"be3067a5e608e3cd2f84f6fe551d4ab9f1cc04aff4f53fa58872f8e1689cc35d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.015631 systemd[1]: run-netns-cni\x2d99e5587d\x2d79b5\x2d6ad3\x2de75a\x2de115a2e90b48.mount: Deactivated successfully.
Jan 22 00:33:29.016521 systemd[1]: run-netns-cni\x2d72f2ab4c\x2dd274\x2d5a84\x2d3923\x2da0af290f0efb.mount: Deactivated successfully.
Jan 22 00:33:29.065017 containerd[1635]: time="2026-01-22T00:33:29.060565946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3067a5e608e3cd2f84f6fe551d4ab9f1cc04aff4f53fa58872f8e1689cc35d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.065349 kubelet[2961]: E0122 00:33:29.061035 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3067a5e608e3cd2f84f6fe551d4ab9f1cc04aff4f53fa58872f8e1689cc35d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.065349 kubelet[2961]: E0122 00:33:29.061104 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3067a5e608e3cd2f84f6fe551d4ab9f1cc04aff4f53fa58872f8e1689cc35d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f"
Jan 22 00:33:29.065349 kubelet[2961]: E0122 00:33:29.061135 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3067a5e608e3cd2f84f6fe551d4ab9f1cc04aff4f53fa58872f8e1689cc35d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f"
Jan 22 00:33:29.069334 kubelet[2961]: E0122 00:33:29.061350 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be3067a5e608e3cd2f84f6fe551d4ab9f1cc04aff4f53fa58872f8e1689cc35d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"
Jan 22 00:33:29.078757 containerd[1635]: time="2026-01-22T00:33:29.073779108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"568d9540da8c5e192ea774d7c8a5add2ab87775f53da9a558b96cff9f97b4ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.083645 kubelet[2961]: E0122 00:33:29.078483 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"568d9540da8c5e192ea774d7c8a5add2ab87775f53da9a558b96cff9f97b4ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.083645 kubelet[2961]: E0122 00:33:29.078565 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"568d9540da8c5e192ea774d7c8a5add2ab87775f53da9a558b96cff9f97b4ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm"
Jan 22 00:33:29.083645 kubelet[2961]: E0122 00:33:29.078594 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"568d9540da8c5e192ea774d7c8a5add2ab87775f53da9a558b96cff9f97b4ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm"
Jan 22 00:33:29.084083 kubelet[2961]: E0122 00:33:29.078670 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"568d9540da8c5e192ea774d7c8a5add2ab87775f53da9a558b96cff9f97b4ded\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a"
Jan 22 00:33:29.135279 containerd[1635]: time="2026-01-22T00:33:29.130328511Z" level=error msg="Failed to destroy network for sandbox \"eec0bd86d482df6b564c064a8e69be4d03b37db62028b93d91527c07a88708c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.172219 systemd[1]: run-netns-cni\x2d4ea57772\x2d780e\x2d4ac2\x2dc30e\x2d454b7f815e74.mount: Deactivated successfully.
Jan 22 00:33:29.227980 containerd[1635]: time="2026-01-22T00:33:29.227741660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec0bd86d482df6b564c064a8e69be4d03b37db62028b93d91527c07a88708c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.233209 kubelet[2961]: E0122 00:33:29.229057 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec0bd86d482df6b564c064a8e69be4d03b37db62028b93d91527c07a88708c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.233209 kubelet[2961]: E0122 00:33:29.229204 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec0bd86d482df6b564c064a8e69be4d03b37db62028b93d91527c07a88708c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd"
Jan 22 00:33:29.233209 kubelet[2961]: E0122 00:33:29.229233 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec0bd86d482df6b564c064a8e69be4d03b37db62028b93d91527c07a88708c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd"
Jan 22 00:33:29.233572 kubelet[2961]: E0122 00:33:29.229301 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eec0bd86d482df6b564c064a8e69be4d03b37db62028b93d91527c07a88708c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2"
Jan 22 00:33:29.238248 containerd[1635]: time="2026-01-22T00:33:29.238118005Z" level=error msg="Failed to destroy network for sandbox \"791946872da8d730428f9016a496883f4d9651b8c3f8ad65ec04150616d291c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.249293 systemd[1]: run-netns-cni\x2d96638fe6\x2daf02\x2d32ca\x2d31c9\x2d22b6440f6a57.mount: Deactivated successfully.
Jan 22 00:33:29.275194 containerd[1635]: time="2026-01-22T00:33:29.275036374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"791946872da8d730428f9016a496883f4d9651b8c3f8ad65ec04150616d291c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.281623 kubelet[2961]: E0122 00:33:29.281571 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"791946872da8d730428f9016a496883f4d9651b8c3f8ad65ec04150616d291c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:29.282929 kubelet[2961]: E0122 00:33:29.282765 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"791946872da8d730428f9016a496883f4d9651b8c3f8ad65ec04150616d291c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7"
Jan 22 00:33:29.283038 kubelet[2961]: E0122 00:33:29.283014 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"791946872da8d730428f9016a496883f4d9651b8c3f8ad65ec04150616d291c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7"
Jan 22 00:33:29.283195 kubelet[2961]: E0122 00:33:29.283155 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"791946872da8d730428f9016a496883f4d9651b8c3f8ad65ec04150616d291c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125"
Jan 22 00:33:33.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.25:22-10.0.0.1:35648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:33:33.365145 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:35648.service - OpenSSH per-connection server daemon (10.0.0.1:35648).
Jan 22 00:33:33.376788 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 22 00:33:33.376997 kernel: audit: type=1130 audit(1769042013.364:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.25:22-10.0.0.1:35648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:33:33.675000 audit[4482]: USER_ACCT pid=4482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.742780 kernel: audit: type=1101 audit(1769042013.675:589): pid=4482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.743007 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 35648 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:33:33.679686 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:33:33.710261 systemd-logind[1609]: New session 12 of user core.
Jan 22 00:33:33.677000 audit[4482]: CRED_ACQ pid=4482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.813647 kernel: audit: type=1103 audit(1769042013.677:590): pid=4482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.813770 kernel: audit: type=1006 audit(1769042013.677:591): pid=4482 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Jan 22 00:33:33.798017 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 22 00:33:33.677000 audit[4482]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a4b7b30 a2=3 a3=0 items=0 ppid=1 pid=4482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:33:33.861085 kernel: audit: type=1300 audit(1769042013.677:591): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a4b7b30 a2=3 a3=0 items=0 ppid=1 pid=4482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:33:33.677000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:33:33.881152 kernel: audit: type=1327 audit(1769042013.677:591): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:33:33.881000 audit[4482]: USER_START pid=4482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.942234 kernel: audit: type=1105 audit(1769042013.881:592): pid=4482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.985291 kernel: audit: type=1103 audit(1769042013.892:593): pid=4485 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:33.892000 audit[4485]: CRED_ACQ pid=4485 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:34.699300 sshd[4485]: Connection closed by 10.0.0.1 port 35648
Jan 22 00:33:34.699074 sshd-session[4482]: pam_unix(sshd:session): session closed for user core
Jan 22 00:33:34.702000 audit[4482]: USER_END pid=4482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:34.728695 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:35648.service: Deactivated successfully.
Jan 22 00:33:34.739507 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 00:33:34.755108 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit.
Jan 22 00:33:34.764701 systemd-logind[1609]: Removed session 12.
Jan 22 00:33:34.802085 kernel: audit: type=1106 audit(1769042014.702:594): pid=4482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:34.802348 kernel: audit: type=1104 audit(1769042014.702:595): pid=4482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:34.702000 audit[4482]: CRED_DISP pid=4482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:33:34.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.25:22-10.0.0.1:35648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:33:36.038502 containerd[1635]: time="2026-01-22T00:33:36.037720859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,}"
Jan 22 00:33:36.678931 containerd[1635]: time="2026-01-22T00:33:36.678711076Z" level=error msg="Failed to destroy network for sandbox \"e2b8f568ed1f3786fb08603e6aaa554ab2e4053b3d91a17dcb9ffd2d6ac1c7ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:36.687711 systemd[1]: run-netns-cni\x2d569e8733\x2d0f88\x2dc4e3\x2d5d4a\x2dbc5a7c0a6484.mount: Deactivated successfully.
Jan 22 00:33:36.763735 containerd[1635]: time="2026-01-22T00:33:36.761937607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b8f568ed1f3786fb08603e6aaa554ab2e4053b3d91a17dcb9ffd2d6ac1c7ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:36.764243 kubelet[2961]: E0122 00:33:36.763100 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b8f568ed1f3786fb08603e6aaa554ab2e4053b3d91a17dcb9ffd2d6ac1c7ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:36.764243 kubelet[2961]: E0122 00:33:36.763183 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b8f568ed1f3786fb08603e6aaa554ab2e4053b3d91a17dcb9ffd2d6ac1c7ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl"
Jan 22 00:33:36.764243 kubelet[2961]: E0122 00:33:36.763212 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b8f568ed1f3786fb08603e6aaa554ab2e4053b3d91a17dcb9ffd2d6ac1c7ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl"
Jan 22 00:33:36.771597 kubelet[2961]: E0122 00:33:36.763283 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2b8f568ed1f3786fb08603e6aaa554ab2e4053b3d91a17dcb9ffd2d6ac1c7ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a"
Jan 22 00:33:38.025966 kubelet[2961]: E0122 00:33:38.024220 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:33:38.026731 containerd[1635]: time="2026-01-22T00:33:38.026176968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,}"
Jan 22 00:33:38.030687 kubelet[2961]: E0122 00:33:38.030013 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:33:38.060998 containerd[1635]: time="2026-01-22T00:33:38.046761709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,}"
Jan 22 00:33:38.919358 containerd[1635]: time="2026-01-22T00:33:38.897364325Z" level=error msg="Failed to destroy network for sandbox \"4d7290596309b62652357bd925cb471c36dd26396962197360ee4d9b9e2d4124\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:38.924501 systemd[1]: run-netns-cni\x2d4c552246\x2d1532\x2dc24d\x2d60fb\x2dc88bd84dbb5f.mount: Deactivated successfully.
Jan 22 00:33:38.953363 containerd[1635]: time="2026-01-22T00:33:38.949310426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7290596309b62652357bd925cb471c36dd26396962197360ee4d9b9e2d4124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:39.057095 kubelet[2961]: E0122 00:33:38.950146 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7290596309b62652357bd925cb471c36dd26396962197360ee4d9b9e2d4124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:33:39.057095 kubelet[2961]: E0122 00:33:38.950241 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7290596309b62652357bd925cb471c36dd26396962197360ee4d9b9e2d4124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn"
Jan 22 00:33:39.057095 kubelet[2961]: E0122 00:33:38.950277 2961 kuberuntime_manager.go:1343]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7290596309b62652357bd925cb471c36dd26396962197360ee4d9b9e2d4124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:39.080537 kubelet[2961]: E0122 00:33:38.950780 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d7290596309b62652357bd925cb471c36dd26396962197360ee4d9b9e2d4124\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlbdn" podUID="1fc2a2d5-dab7-482f-b368-90c3db40ee93" Jan 22 00:33:39.291647 containerd[1635]: time="2026-01-22T00:33:39.290106291Z" level=error msg="Failed to destroy network for sandbox \"e8331d6e5009c488af30ba87c685a3db22d5abcf7e4344b802b91fafc2cef0d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:39.308527 systemd[1]: run-netns-cni\x2d11b2f956\x2d6b67\x2d36d7\x2d74d4\x2deaa5ad296009.mount: Deactivated successfully. 
Jan 22 00:33:39.468483 containerd[1635]: time="2026-01-22T00:33:39.468003178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8331d6e5009c488af30ba87c685a3db22d5abcf7e4344b802b91fafc2cef0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:39.469126 kubelet[2961]: E0122 00:33:39.468987 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8331d6e5009c488af30ba87c685a3db22d5abcf7e4344b802b91fafc2cef0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:39.469126 kubelet[2961]: E0122 00:33:39.469064 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8331d6e5009c488af30ba87c685a3db22d5abcf7e4344b802b91fafc2cef0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:39.469126 kubelet[2961]: E0122 00:33:39.469099 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8331d6e5009c488af30ba87c685a3db22d5abcf7e4344b802b91fafc2cef0d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:39.469617 kubelet[2961]: E0122 00:33:39.469171 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8331d6e5009c488af30ba87c685a3db22d5abcf7e4344b802b91fafc2cef0d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zqk9r" podUID="2cb94888-9f16-48f2-8fc7-64c6889ef0fc" Jan 22 00:33:39.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.25:22-10.0.0.1:52978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:39.728965 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:52978.service - OpenSSH per-connection server daemon (10.0.0.1:52978). Jan 22 00:33:39.768110 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:33:39.768279 kernel: audit: type=1130 audit(1769042019.728:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.25:22-10.0.0.1:52978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:33:40.430121 containerd[1635]: time="2026-01-22T00:33:40.420770656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:40.465701 containerd[1635]: time="2026-01-22T00:33:40.465554997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:40.500007 containerd[1635]: time="2026-01-22T00:33:40.498317563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:40.852000 audit[4596]: USER_ACCT pid=4596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:40.982057 kernel: audit: type=1101 audit(1769042020.852:598): pid=4596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:40.860097 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:33:40.988129 sshd[4596]: Accepted publickey for core from 10.0.0.1 port 52978 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:33:41.104004 kernel: audit: type=1103 audit(1769042020.853:599): pid=4596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 
00:33:40.853000 audit[4596]: CRED_ACQ pid=4596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:41.151041 systemd-logind[1609]: New session 13 of user core. Jan 22 00:33:40.853000 audit[4596]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff37950c60 a2=3 a3=0 items=0 ppid=1 pid=4596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:41.223630 kernel: audit: type=1006 audit(1769042020.853:600): pid=4596 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 22 00:33:41.223788 kernel: audit: type=1300 audit(1769042020.853:600): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff37950c60 a2=3 a3=0 items=0 ppid=1 pid=4596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:41.224247 kernel: audit: type=1327 audit(1769042020.853:600): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:40.853000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:41.224686 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 22 00:33:41.333359 kernel: audit: type=1105 audit(1769042021.235:601): pid=4596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:41.235000 audit[4596]: USER_START pid=4596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:41.335000 audit[4638]: CRED_ACQ pid=4638 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:41.419996 kernel: audit: type=1103 audit(1769042021.335:602): pid=4638 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:42.089366 containerd[1635]: time="2026-01-22T00:33:42.085544733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:42.231977 sshd[4638]: Connection closed by 10.0.0.1 port 52978 Jan 22 00:33:42.233289 sshd-session[4596]: pam_unix(sshd:session): session closed for user core Jan 22 00:33:42.359729 kernel: audit: type=1106 audit(1769042022.257:603): pid=4596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:42.257000 audit[4596]: USER_END pid=4596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:42.270746 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:52978.service: Deactivated successfully. Jan 22 00:33:42.409351 kernel: audit: type=1104 audit(1769042022.257:604): pid=4596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:42.257000 audit[4596]: CRED_DISP pid=4596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:42.288332 systemd[1]: session-13.scope: Deactivated successfully. Jan 22 00:33:42.403931 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit. Jan 22 00:33:42.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.25:22-10.0.0.1:52978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:42.429779 systemd-logind[1609]: Removed session 13. 
Jan 22 00:33:42.539184 containerd[1635]: time="2026-01-22T00:33:42.538202068Z" level=error msg="Failed to destroy network for sandbox \"b92dc93a5c486acf9da0136173be468e3c48d325160bddf79ff4247cae6ef174\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:42.556640 systemd[1]: run-netns-cni\x2d7c874156\x2deb85\x2d2ba6\x2d6161\x2d37841a391853.mount: Deactivated successfully. Jan 22 00:33:42.569213 systemd[1727]: Created slice background.slice - User Background Tasks Slice. Jan 22 00:33:42.580188 systemd[1727]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 22 00:33:42.636643 containerd[1635]: time="2026-01-22T00:33:42.633374459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92dc93a5c486acf9da0136173be468e3c48d325160bddf79ff4247cae6ef174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:42.653649 kubelet[2961]: E0122 00:33:42.641374 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92dc93a5c486acf9da0136173be468e3c48d325160bddf79ff4247cae6ef174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:42.653649 kubelet[2961]: E0122 00:33:42.641533 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b92dc93a5c486acf9da0136173be468e3c48d325160bddf79ff4247cae6ef174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:42.653649 kubelet[2961]: E0122 00:33:42.641564 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b92dc93a5c486acf9da0136173be468e3c48d325160bddf79ff4247cae6ef174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:42.655125 kubelet[2961]: E0122 00:33:42.641632 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b92dc93a5c486acf9da0136173be468e3c48d325160bddf79ff4247cae6ef174\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4" Jan 22 00:33:42.731169 systemd[1727]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 22 00:33:42.813254 containerd[1635]: time="2026-01-22T00:33:42.813187114Z" level=error msg="Failed to destroy network for sandbox \"89803b37fb5eb1dc6d04998e09edd8ac38e35d4804473a7a794c943954218988\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:42.832324 systemd[1]: run-netns-cni\x2d41a806b6\x2d0904\x2d3d79\x2da7f3\x2df94ad9228283.mount: Deactivated successfully. Jan 22 00:33:42.872229 containerd[1635]: time="2026-01-22T00:33:42.870626244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89803b37fb5eb1dc6d04998e09edd8ac38e35d4804473a7a794c943954218988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:42.872772 kubelet[2961]: E0122 00:33:42.871128 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89803b37fb5eb1dc6d04998e09edd8ac38e35d4804473a7a794c943954218988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:42.872772 kubelet[2961]: E0122 00:33:42.871194 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89803b37fb5eb1dc6d04998e09edd8ac38e35d4804473a7a794c943954218988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:42.872772 kubelet[2961]: E0122 00:33:42.871221 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89803b37fb5eb1dc6d04998e09edd8ac38e35d4804473a7a794c943954218988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:42.873151 kubelet[2961]: E0122 00:33:42.871294 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89803b37fb5eb1dc6d04998e09edd8ac38e35d4804473a7a794c943954218988\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:33:43.100059 containerd[1635]: time="2026-01-22T00:33:43.081979412Z" level=error msg="Failed to destroy network for sandbox \"dc153fef4155c32b30c995240ea2c75fa44adce20417e5455c2dc6588c21e2a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:43.119002 containerd[1635]: time="2026-01-22T00:33:43.114715526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc153fef4155c32b30c995240ea2c75fa44adce20417e5455c2dc6588c21e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:43.117318 systemd[1]: run-netns-cni\x2df5762fff\x2df854\x2d287d\x2dd0e9\x2df38614f99f6f.mount: Deactivated successfully. Jan 22 00:33:43.119535 kubelet[2961]: E0122 00:33:43.115611 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc153fef4155c32b30c995240ea2c75fa44adce20417e5455c2dc6588c21e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:43.119535 kubelet[2961]: E0122 00:33:43.115694 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc153fef4155c32b30c995240ea2c75fa44adce20417e5455c2dc6588c21e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:33:43.119535 kubelet[2961]: E0122 00:33:43.115726 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc153fef4155c32b30c995240ea2c75fa44adce20417e5455c2dc6588c21e2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:33:43.119674 kubelet[2961]: E0122 00:33:43.115960 2961 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc153fef4155c32b30c995240ea2c75fa44adce20417e5455c2dc6588c21e2a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:44.033949 containerd[1635]: time="2026-01-22T00:33:44.023732617Z" level=error msg="Failed to destroy network for sandbox \"6cf2163ae112aee3764d6f88a1915015b2c32b67c2b9f3f9e13aa8ab30236809\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:44.062291 systemd[1]: run-netns-cni\x2d4027dffc\x2d246f\x2d11a4\x2dbf84\x2dd72da99d8f79.mount: Deactivated successfully. 
Jan 22 00:33:44.090259 containerd[1635]: time="2026-01-22T00:33:44.089188300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:44.118885 containerd[1635]: time="2026-01-22T00:33:44.117196661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:44.139969 containerd[1635]: time="2026-01-22T00:33:44.131517836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf2163ae112aee3764d6f88a1915015b2c32b67c2b9f3f9e13aa8ab30236809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:44.141366 kubelet[2961]: E0122 00:33:44.132149 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf2163ae112aee3764d6f88a1915015b2c32b67c2b9f3f9e13aa8ab30236809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:44.141366 kubelet[2961]: E0122 00:33:44.132223 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf2163ae112aee3764d6f88a1915015b2c32b67c2b9f3f9e13aa8ab30236809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:44.141366 kubelet[2961]: E0122 00:33:44.132255 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cf2163ae112aee3764d6f88a1915015b2c32b67c2b9f3f9e13aa8ab30236809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:44.142161 kubelet[2961]: E0122 00:33:44.132333 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cf2163ae112aee3764d6f88a1915015b2c32b67c2b9f3f9e13aa8ab30236809\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df8f6b8cf-s2hhm" podUID="bbc09f02-5803-4a1d-8b41-1e543cceb488" Jan 22 00:33:45.109540 containerd[1635]: time="2026-01-22T00:33:45.107655309Z" level=error msg="Failed to destroy network for sandbox \"43f9d126ec11dbfa56be0a77850595916d17c58ac12bcb973541b7a22698b269\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:45.109950 containerd[1635]: time="2026-01-22T00:33:45.109574440Z" level=error msg="Failed to destroy network for sandbox \"0a8b09bac22da59b3997751b218a739e6cc2b8cf992d1287cc40f6035f1bed46\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:45.121318 systemd[1]: run-netns-cni\x2d2b796b78\x2d8ebb\x2d35ed\x2daff5\x2de60fe7c1f739.mount: Deactivated successfully. Jan 22 00:33:45.131593 systemd[1]: run-netns-cni\x2dfdf10c79\x2d3469\x2df2b8\x2da6bb\x2d152e7843ef6f.mount: Deactivated successfully. Jan 22 00:33:45.140124 containerd[1635]: time="2026-01-22T00:33:45.138747193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8b09bac22da59b3997751b218a739e6cc2b8cf992d1287cc40f6035f1bed46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:45.151699 kubelet[2961]: E0122 00:33:45.142285 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8b09bac22da59b3997751b218a739e6cc2b8cf992d1287cc40f6035f1bed46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:45.151699 kubelet[2961]: E0122 00:33:45.142378 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8b09bac22da59b3997751b218a739e6cc2b8cf992d1287cc40f6035f1bed46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:45.151699 kubelet[2961]: E0122 00:33:45.142495 
2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a8b09bac22da59b3997751b218a739e6cc2b8cf992d1287cc40f6035f1bed46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:45.152600 containerd[1635]: time="2026-01-22T00:33:45.149924316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f9d126ec11dbfa56be0a77850595916d17c58ac12bcb973541b7a22698b269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:45.152998 kubelet[2961]: E0122 00:33:45.142581 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a8b09bac22da59b3997751b218a739e6cc2b8cf992d1287cc40f6035f1bed46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:33:45.152998 kubelet[2961]: E0122 00:33:45.151203 2961 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f9d126ec11dbfa56be0a77850595916d17c58ac12bcb973541b7a22698b269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:45.152998 kubelet[2961]: E0122 00:33:45.151253 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f9d126ec11dbfa56be0a77850595916d17c58ac12bcb973541b7a22698b269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:45.166679 kubelet[2961]: E0122 00:33:45.151276 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f9d126ec11dbfa56be0a77850595916d17c58ac12bcb973541b7a22698b269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:45.166679 kubelet[2961]: E0122 00:33:45.151331 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43f9d126ec11dbfa56be0a77850595916d17c58ac12bcb973541b7a22698b269\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:33:47.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.25:22-10.0.0.1:44712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:47.287012 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:44712.service - OpenSSH per-connection server daemon (10.0.0.1:44712). Jan 22 00:33:47.333267 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:33:47.335576 kernel: audit: type=1130 audit(1769042027.286:606): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.25:22-10.0.0.1:44712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:47.626000 audit[4811]: USER_ACCT pid=4811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:47.630072 sshd[4811]: Accepted publickey for core from 10.0.0.1 port 44712 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:33:47.657154 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:33:47.683996 kernel: audit: type=1101 audit(1769042027.626:607): pid=4811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:47.641000 audit[4811]: CRED_ACQ pid=4811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:47.732591 kernel: audit: type=1103 audit(1769042027.641:608): pid=4811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:47.737328 systemd-logind[1609]: New session 14 of user core. Jan 22 00:33:47.655000 audit[4811]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0331950 a2=3 a3=0 items=0 ppid=1 pid=4811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:47.866405 kernel: audit: type=1006 audit(1769042027.655:609): pid=4811 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 22 00:33:47.867063 kernel: audit: type=1300 audit(1769042027.655:609): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0331950 a2=3 a3=0 items=0 ppid=1 pid=4811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:47.867122 kernel: audit: type=1327 audit(1769042027.655:609): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:47.655000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:47.873707 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 22 00:33:47.901000 audit[4811]: USER_START pid=4811 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.035148 kernel: audit: type=1105 audit(1769042027.901:610): pid=4811 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.035306 kernel: audit: type=1103 audit(1769042027.925:611): pid=4814 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:47.925000 audit[4814]: CRED_ACQ pid=4814 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.522979 sshd[4814]: Connection closed by 10.0.0.1 port 44712 Jan 22 00:33:48.524038 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Jan 22 00:33:48.525000 audit[4811]: USER_END pid=4811 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.531590 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:44712.service: Deactivated successfully. 
Jan 22 00:33:48.569983 kernel: audit: type=1106 audit(1769042028.525:612): pid=4811 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.526000 audit[4811]: CRED_DISP pid=4811 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.638525 kernel: audit: type=1104 audit(1769042028.526:613): pid=4811 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:48.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.25:22-10.0.0.1:44712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:48.646603 systemd[1]: session-14.scope: Deactivated successfully. Jan 22 00:33:48.668398 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit. Jan 22 00:33:48.672198 systemd-logind[1609]: Removed session 14. 
Jan 22 00:33:51.004589 containerd[1635]: time="2026-01-22T00:33:51.003319519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:51.449394 containerd[1635]: time="2026-01-22T00:33:51.449172185Z" level=error msg="Failed to destroy network for sandbox \"ae5ca24dcdb3da46854c00dae6106e7452a447f18edd7ed4e59a6f4f5d298992\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:51.459711 systemd[1]: run-netns-cni\x2d8fe15449\x2ddc0c\x2d040e\x2d2443\x2d0fc21afc514a.mount: Deactivated successfully. Jan 22 00:33:51.491113 containerd[1635]: time="2026-01-22T00:33:51.490713636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5ca24dcdb3da46854c00dae6106e7452a447f18edd7ed4e59a6f4f5d298992\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:51.491601 kubelet[2961]: E0122 00:33:51.491294 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5ca24dcdb3da46854c00dae6106e7452a447f18edd7ed4e59a6f4f5d298992\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:51.491601 kubelet[2961]: E0122 00:33:51.491367 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"ae5ca24dcdb3da46854c00dae6106e7452a447f18edd7ed4e59a6f4f5d298992\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:51.491601 kubelet[2961]: E0122 00:33:51.491393 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5ca24dcdb3da46854c00dae6106e7452a447f18edd7ed4e59a6f4f5d298992\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" Jan 22 00:33:51.492571 kubelet[2961]: E0122 00:33:51.491551 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae5ca24dcdb3da46854c00dae6106e7452a447f18edd7ed4e59a6f4f5d298992\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a" Jan 22 00:33:52.023391 kubelet[2961]: E0122 00:33:52.023335 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:52.027379 containerd[1635]: time="2026-01-22T00:33:52.027194104Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,}" Jan 22 00:33:52.396090 containerd[1635]: time="2026-01-22T00:33:52.396012249Z" level=error msg="Failed to destroy network for sandbox \"ca349e555794e54781f7a83e6c2a7e8ba919c4cd7eb6563913f47fd2cf80e394\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:52.405007 systemd[1]: run-netns-cni\x2dd7a2857d\x2d93b7\x2d22b1\x2d7064\x2da76ad4afd8dd.mount: Deactivated successfully. Jan 22 00:33:52.414656 containerd[1635]: time="2026-01-22T00:33:52.414424196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca349e555794e54781f7a83e6c2a7e8ba919c4cd7eb6563913f47fd2cf80e394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:52.417124 kubelet[2961]: E0122 00:33:52.417042 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca349e555794e54781f7a83e6c2a7e8ba919c4cd7eb6563913f47fd2cf80e394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:52.417216 kubelet[2961]: E0122 00:33:52.417133 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca349e555794e54781f7a83e6c2a7e8ba919c4cd7eb6563913f47fd2cf80e394\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:52.417216 kubelet[2961]: E0122 00:33:52.417162 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca349e555794e54781f7a83e6c2a7e8ba919c4cd7eb6563913f47fd2cf80e394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn" Jan 22 00:33:52.417300 kubelet[2961]: E0122 00:33:52.417233 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca349e555794e54781f7a83e6c2a7e8ba919c4cd7eb6563913f47fd2cf80e394\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlbdn" podUID="1fc2a2d5-dab7-482f-b368-90c3db40ee93" Jan 22 00:33:53.578749 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:44718.service - OpenSSH per-connection server daemon (10.0.0.1:44718). Jan 22 00:33:53.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.25:22-10.0.0.1:44718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:33:53.588084 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:33:53.588288 kernel: audit: type=1130 audit(1769042033.580:615): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.25:22-10.0.0.1:44718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:54.001082 sshd[4891]: Accepted publickey for core from 10.0.0.1 port 44718 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:33:53.999000 audit[4891]: USER_ACCT pid=4891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.029125 systemd-logind[1609]: New session 15 of user core. Jan 22 00:33:54.034549 kubelet[2961]: E0122 00:33:54.022432 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:33:54.035168 containerd[1635]: time="2026-01-22T00:33:54.009252070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:54.035168 containerd[1635]: time="2026-01-22T00:33:54.012965000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:54.035168 containerd[1635]: time="2026-01-22T00:33:54.026640314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,}" Jan 22 00:33:54.005967 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Jan 22 00:33:54.042954 kernel: audit: type=1101 audit(1769042033.999:616): pid=4891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.004000 audit[4891]: CRED_ACQ pid=4891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.059099 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 22 00:33:54.092346 kernel: audit: type=1103 audit(1769042034.004:617): pid=4891 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.092566 kernel: audit: type=1006 audit(1769042034.004:618): pid=4891 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 22 00:33:54.004000 audit[4891]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe137520c0 a2=3 a3=0 items=0 ppid=1 pid=4891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:54.177629 kernel: audit: type=1300 audit(1769042034.004:618): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe137520c0 a2=3 a3=0 items=0 ppid=1 pid=4891 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:54.177766 kernel: audit: type=1327 audit(1769042034.004:618): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:54.004000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:54.092000 audit[4891]: USER_START pid=4891 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.225057 kernel: audit: type=1105 audit(1769042034.092:619): pid=4891 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.277717 kernel: audit: type=1103 audit(1769042034.104:620): pid=4921 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.104000 audit[4921]: CRED_ACQ pid=4921 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.596346 sshd[4921]: Connection closed by 10.0.0.1 port 44718 Jan 22 00:33:54.593331 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Jan 22 00:33:54.604000 audit[4891]: USER_END pid=4891 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.613717 
systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:44718.service: Deactivated successfully. Jan 22 00:33:54.622089 systemd[1]: session-15.scope: Deactivated successfully. Jan 22 00:33:54.635733 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit. Jan 22 00:33:54.644398 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:40662.service - OpenSSH per-connection server daemon (10.0.0.1:40662). Jan 22 00:33:54.656775 kernel: audit: type=1106 audit(1769042034.604:621): pid=4891 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.604000 audit[4891]: CRED_DISP pid=4891 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.668184 systemd-logind[1609]: Removed session 15. Jan 22 00:33:54.719188 kernel: audit: type=1104 audit(1769042034.604:622): pid=4891 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.25:22-10.0.0.1:44718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:54.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.25:22-10.0.0.1:40662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:33:54.881044 containerd[1635]: time="2026-01-22T00:33:54.876631433Z" level=error msg="Failed to destroy network for sandbox \"c6cb56856bc03279e9e2de66fc40fb232fda092e2478e6061dc6a2c10b0241f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:54.885541 systemd[1]: run-netns-cni\x2d297bfa33\x2d5802\x2dcb99\x2d9fe9\x2df980a0160ef6.mount: Deactivated successfully. Jan 22 00:33:54.929618 containerd[1635]: time="2026-01-22T00:33:54.929187182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cb56856bc03279e9e2de66fc40fb232fda092e2478e6061dc6a2c10b0241f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:54.933118 kubelet[2961]: E0122 00:33:54.932694 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cb56856bc03279e9e2de66fc40fb232fda092e2478e6061dc6a2c10b0241f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:54.936359 kubelet[2961]: E0122 00:33:54.936321 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cb56856bc03279e9e2de66fc40fb232fda092e2478e6061dc6a2c10b0241f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:54.936529 kubelet[2961]: E0122 00:33:54.936364 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cb56856bc03279e9e2de66fc40fb232fda092e2478e6061dc6a2c10b0241f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:33:54.943730 kubelet[2961]: E0122 00:33:54.943544 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6cb56856bc03279e9e2de66fc40fb232fda092e2478e6061dc6a2c10b0241f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zqk9r" podUID="2cb94888-9f16-48f2-8fc7-64c6889ef0fc" Jan 22 00:33:54.945000 audit[4970]: USER_ACCT pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.950149 sshd[4970]: Accepted publickey for core from 10.0.0.1 port 40662 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:33:54.952000 audit[4970]: CRED_ACQ pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:54.952000 audit[4970]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9412a1f0 a2=3 a3=0 items=0 ppid=1 pid=4970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:54.952000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:54.957744 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:33:55.023037 containerd[1635]: time="2026-01-22T00:33:55.019992835Z" level=error msg="Failed to destroy network for sandbox \"c96fcb7674c152b729aa8a08983897dea041e1025916f77ac49fd2e1e6a41bcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:55.026340 systemd[1]: run-netns-cni\x2d8bbc0977\x2dca52\x2dceb0\x2dacc4\x2d308a839689a6.mount: Deactivated successfully. 
Jan 22 00:33:55.050150 containerd[1635]: time="2026-01-22T00:33:55.049150016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96fcb7674c152b729aa8a08983897dea041e1025916f77ac49fd2e1e6a41bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:55.051075 kubelet[2961]: E0122 00:33:55.050367 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96fcb7674c152b729aa8a08983897dea041e1025916f77ac49fd2e1e6a41bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:55.051075 kubelet[2961]: E0122 00:33:55.050441 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96fcb7674c152b729aa8a08983897dea041e1025916f77ac49fd2e1e6a41bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:55.051075 kubelet[2961]: E0122 00:33:55.050558 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96fcb7674c152b729aa8a08983897dea041e1025916f77ac49fd2e1e6a41bcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-956jd" Jan 22 00:33:55.052652 kubelet[2961]: E0122 00:33:55.051083 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c96fcb7674c152b729aa8a08983897dea041e1025916f77ac49fd2e1e6a41bcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:33:55.075588 systemd-logind[1609]: New session 16 of user core. Jan 22 00:33:55.095079 containerd[1635]: time="2026-01-22T00:33:55.094794386Z" level=error msg="Failed to destroy network for sandbox \"fd380c6acbe921592bee89e98b214fa82e338adcb2b41bcd168a6cd147fc4b36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:55.098129 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 22 00:33:55.110001 systemd[1]: run-netns-cni\x2dd31fe8e3\x2dab56\x2d2a58\x2d9f2b\x2dc5995045777b.mount: Deactivated successfully. 
Jan 22 00:33:55.126709 containerd[1635]: time="2026-01-22T00:33:55.125183717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd380c6acbe921592bee89e98b214fa82e338adcb2b41bcd168a6cd147fc4b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:55.127160 kubelet[2961]: E0122 00:33:55.126365 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd380c6acbe921592bee89e98b214fa82e338adcb2b41bcd168a6cd147fc4b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:55.129211 kubelet[2961]: E0122 00:33:55.126436 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd380c6acbe921592bee89e98b214fa82e338adcb2b41bcd168a6cd147fc4b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:55.129300 kubelet[2961]: E0122 00:33:55.129222 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd380c6acbe921592bee89e98b214fa82e338adcb2b41bcd168a6cd147fc4b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" Jan 22 00:33:55.129700 kubelet[2961]: E0122 00:33:55.129373 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd380c6acbe921592bee89e98b214fa82e338adcb2b41bcd168a6cd147fc4b36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4" Jan 22 00:33:55.136000 audit[4970]: USER_START pid=4970 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.142000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.641589 sshd[5011]: Connection closed by 10.0.0.1 port 40662 Jan 22 00:33:55.646191 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Jan 22 00:33:55.653000 audit[4970]: USER_END pid=4970 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.654000 audit[4970]: CRED_DISP pid=4970 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.25:22-10.0.0.1:40662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:55.668935 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:40662.service: Deactivated successfully. Jan 22 00:33:55.674414 systemd[1]: session-16.scope: Deactivated successfully. Jan 22 00:33:55.680950 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit. Jan 22 00:33:55.686262 systemd-logind[1609]: Removed session 16. Jan 22 00:33:55.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.25:22-10.0.0.1:40676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:55.700565 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:40676.service - OpenSSH per-connection server daemon (10.0.0.1:40676). 
Jan 22 00:33:55.898277 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 40676 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:33:55.897000 audit[5023]: USER_ACCT pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.903000 audit[5023]: CRED_ACQ pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.903000 audit[5023]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbdf69900 a2=3 a3=0 items=0 ppid=1 pid=5023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:33:55.903000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:33:55.906109 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:33:55.932071 systemd-logind[1609]: New session 17 of user core. Jan 22 00:33:55.947160 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 22 00:33:55.965000 audit[5023]: USER_START pid=5023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:55.970000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:56.051253 containerd[1635]: time="2026-01-22T00:33:56.051131035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:56.561967 sshd[5026]: Connection closed by 10.0.0.1 port 40676 Jan 22 00:33:56.569397 sshd-session[5023]: pam_unix(sshd:session): session closed for user core Jan 22 00:33:56.572625 containerd[1635]: time="2026-01-22T00:33:56.572370616Z" level=error msg="Failed to destroy network for sandbox \"00a4c82eb0369c28e7a8b7be6a384c74ee0af7d067bad47feee348533f5fee2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:56.592971 systemd[1]: run-netns-cni\x2d609bfe3a\x2d532a\x2d384d\x2de8d5\x2d8f23c6c6f5c8.mount: Deactivated successfully. 
Jan 22 00:33:56.589000 audit[5023]: USER_END pid=5023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:56.600142 containerd[1635]: time="2026-01-22T00:33:56.599775467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a4c82eb0369c28e7a8b7be6a384c74ee0af7d067bad47feee348533f5fee2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:56.600688 kubelet[2961]: E0122 00:33:56.600634 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a4c82eb0369c28e7a8b7be6a384c74ee0af7d067bad47feee348533f5fee2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:56.606000 audit[5023]: CRED_DISP pid=5023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:33:56.607303 kubelet[2961]: E0122 00:33:56.605034 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a4c82eb0369c28e7a8b7be6a384c74ee0af7d067bad47feee348533f5fee2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:33:56.607303 kubelet[2961]: E0122 00:33:56.605083 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a4c82eb0369c28e7a8b7be6a384c74ee0af7d067bad47feee348533f5fee2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f" Jan 22 00:33:56.607303 kubelet[2961]: E0122 00:33:56.605166 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00a4c82eb0369c28e7a8b7be6a384c74ee0af7d067bad47feee348533f5fee2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:33:56.624418 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:40676.service: Deactivated successfully. Jan 22 00:33:56.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.25:22-10.0.0.1:40676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:33:56.633124 systemd[1]: session-17.scope: Deactivated successfully. Jan 22 00:33:56.644183 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit. 
Jan 22 00:33:56.665082 systemd-logind[1609]: Removed session 17. Jan 22 00:33:57.996021 containerd[1635]: time="2026-01-22T00:33:57.995700039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:58.007009 containerd[1635]: time="2026-01-22T00:33:58.006731835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:33:58.007009 containerd[1635]: time="2026-01-22T00:33:58.006943945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,}" Jan 22 00:33:58.416950 containerd[1635]: time="2026-01-22T00:33:58.416730918Z" level=error msg="Failed to destroy network for sandbox \"178046f2b0a9f1d5f64bed630f4910483e35035feb6d53215809810dd31fd3c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.422563 systemd[1]: run-netns-cni\x2d78cb3dbf\x2d04cf\x2dd5c6\x2d7ebb\x2df97db181dbd2.mount: Deactivated successfully. 
Jan 22 00:33:58.435440 containerd[1635]: time="2026-01-22T00:33:58.435180080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"178046f2b0a9f1d5f64bed630f4910483e35035feb6d53215809810dd31fd3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.435778 kubelet[2961]: E0122 00:33:58.435604 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178046f2b0a9f1d5f64bed630f4910483e35035feb6d53215809810dd31fd3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.435778 kubelet[2961]: E0122 00:33:58.435674 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178046f2b0a9f1d5f64bed630f4910483e35035feb6d53215809810dd31fd3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:58.435778 kubelet[2961]: E0122 00:33:58.435700 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178046f2b0a9f1d5f64bed630f4910483e35035feb6d53215809810dd31fd3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" Jan 22 00:33:58.437560 kubelet[2961]: E0122 00:33:58.436702 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"178046f2b0a9f1d5f64bed630f4910483e35035feb6d53215809810dd31fd3c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:33:58.447094 containerd[1635]: time="2026-01-22T00:33:58.447033947Z" level=error msg="Failed to destroy network for sandbox \"8243313c60575e7b3d94719f786e4ee8aa2b59dc3ee7af51389fc7b60054a7da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.452092 systemd[1]: run-netns-cni\x2d29eeacbd\x2dc275\x2d61c1\x2db457\x2d8865b3588722.mount: Deactivated successfully. 
Jan 22 00:33:58.464351 containerd[1635]: time="2026-01-22T00:33:58.463794953Z" level=error msg="Failed to destroy network for sandbox \"b04f8827b074e7cd5032fa81a95709e8bb2f6a1cd9ee7cc40215dfe4bbad018b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.467020 containerd[1635]: time="2026-01-22T00:33:58.466729901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243313c60575e7b3d94719f786e4ee8aa2b59dc3ee7af51389fc7b60054a7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.470759 kubelet[2961]: E0122 00:33:58.469345 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243313c60575e7b3d94719f786e4ee8aa2b59dc3ee7af51389fc7b60054a7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.470759 kubelet[2961]: E0122 00:33:58.469412 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243313c60575e7b3d94719f786e4ee8aa2b59dc3ee7af51389fc7b60054a7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:58.470759 kubelet[2961]: E0122 00:33:58.469708 2961 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243313c60575e7b3d94719f786e4ee8aa2b59dc3ee7af51389fc7b60054a7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:33:58.471096 kubelet[2961]: E0122 00:33:58.469771 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8243313c60575e7b3d94719f786e4ee8aa2b59dc3ee7af51389fc7b60054a7da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:33:58.475727 containerd[1635]: time="2026-01-22T00:33:58.475064441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04f8827b074e7cd5032fa81a95709e8bb2f6a1cd9ee7cc40215dfe4bbad018b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.476776 kubelet[2961]: E0122 00:33:58.475534 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"b04f8827b074e7cd5032fa81a95709e8bb2f6a1cd9ee7cc40215dfe4bbad018b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:33:58.476776 kubelet[2961]: E0122 00:33:58.475612 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04f8827b074e7cd5032fa81a95709e8bb2f6a1cd9ee7cc40215dfe4bbad018b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:58.476776 kubelet[2961]: E0122 00:33:58.475647 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b04f8827b074e7cd5032fa81a95709e8bb2f6a1cd9ee7cc40215dfe4bbad018b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df8f6b8cf-s2hhm" Jan 22 00:33:58.477141 kubelet[2961]: E0122 00:33:58.475719 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b04f8827b074e7cd5032fa81a95709e8bb2f6a1cd9ee7cc40215dfe4bbad018b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-5df8f6b8cf-s2hhm" podUID="bbc09f02-5803-4a1d-8b41-1e543cceb488" Jan 22 00:33:59.016272 systemd[1]: run-netns-cni\x2dadad3c78\x2deee0\x2d425f\x2d3ba8\x2d9b23d9e40be7.mount: Deactivated successfully. Jan 22 00:33:59.991201 kubelet[2961]: E0122 00:33:59.990718 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:00.989027 kubelet[2961]: E0122 00:34:00.987624 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:01.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.25:22-10.0.0.1:40678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:01.596702 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:40678.service - OpenSSH per-connection server daemon (10.0.0.1:40678). Jan 22 00:34:01.682339 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 22 00:34:01.682604 kernel: audit: type=1130 audit(1769042041.596:642): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.25:22-10.0.0.1:40678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:02.065000 audit[5172]: USER_ACCT pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:02.095995 systemd-logind[1609]: New session 18 of user core. 
Jan 22 00:34:02.076320 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:02.107926 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 40678 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:02.109660 kernel: audit: type=1101 audit(1769042042.065:643): pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:02.110027 kernel: audit: type=1103 audit(1769042042.071:644): pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:02.071000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:02.177176 kernel: audit: type=1006 audit(1769042042.071:645): pid=5172 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 22 00:34:02.071000 audit[5172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3832400 a2=3 a3=0 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:02.178275 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 22 00:34:02.222724 kernel: audit: type=1300 audit(1769042042.071:645): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3832400 a2=3 a3=0 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:02.222987 kernel: audit: type=1327 audit(1769042042.071:645): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:02.071000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:02.189000 audit[5172]: USER_START pid=5172 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.196000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.362734 kernel: audit: type=1105 audit(1769042042.189:646): pid=5172 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.363089 kernel: audit: type=1103 audit(1769042042.196:647): pid=5175 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.724711 sshd[5175]: Connection closed by 10.0.0.1 port 40678
Jan 22 00:34:02.725579 sshd-session[5172]: pam_unix(sshd:session): session closed for user core
Jan 22 00:34:02.729000 audit[5172]: USER_END pid=5172 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.774066 kernel: audit: type=1106 audit(1769042042.729:648): pid=5172 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.774655 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:40678.service: Deactivated successfully.
Jan 22 00:34:02.729000 audit[5172]: CRED_DISP pid=5172 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.797433 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 00:34:02.806019 kernel: audit: type=1104 audit(1769042042.729:649): pid=5172 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:02.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.25:22-10.0.0.1:40678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:02.810191 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit.
Jan 22 00:34:02.819425 systemd-logind[1609]: Removed session 18.
Jan 22 00:34:04.046227 kubelet[2961]: E0122 00:34:04.043173 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:34:04.096668 containerd[1635]: time="2026-01-22T00:34:04.091455632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,}"
Jan 22 00:34:06.633216 containerd[1635]: time="2026-01-22T00:34:06.631630031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,}"
Jan 22 00:34:06.638976 containerd[1635]: time="2026-01-22T00:34:06.637178296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,}"
Jan 22 00:34:06.928133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469958007.mount: Deactivated successfully.
Jan 22 00:34:07.625424 containerd[1635]: time="2026-01-22T00:34:07.625362385Z" level=error msg="Failed to destroy network for sandbox \"f115fa193ce075ba49fc2c015cd1a25264956b47cadd4747cc9397c3d3c85477\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.641285 systemd[1]: run-netns-cni\x2dcdb920da\x2d3218\x2d5053\x2d4f89\x2dc50bf1b81b52.mount: Deactivated successfully.
Jan 22 00:34:07.662770 containerd[1635]: time="2026-01-22T00:34:07.662409996Z" level=error msg="Failed to destroy network for sandbox \"bf0c1b42b01ed57fe4b1c982f16ef1f2727dd16a38274ac9c4731119cb83a547\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.671208 systemd[1]: run-netns-cni\x2d55dff7a4\x2d43c5\x2d38e5\x2dcb89\x2d8babb30e0a1b.mount: Deactivated successfully.
Jan 22 00:34:07.695964 containerd[1635]: time="2026-01-22T00:34:07.692129117Z" level=error msg="Failed to destroy network for sandbox \"a5647c16c99e68a1e0e049c94f93625d36beedb7b7079d9ea6b9882939146563\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.703150 systemd[1]: run-netns-cni\x2db7852cde\x2dbcb5\x2d449c\x2d68ca\x2d6ae91c7e7105.mount: Deactivated successfully.
Jan 22 00:34:07.715919 containerd[1635]: time="2026-01-22T00:34:07.713750206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f115fa193ce075ba49fc2c015cd1a25264956b47cadd4747cc9397c3d3c85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.716439 kubelet[2961]: E0122 00:34:07.715285 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f115fa193ce075ba49fc2c015cd1a25264956b47cadd4747cc9397c3d3c85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.717642 kubelet[2961]: E0122 00:34:07.716490 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f115fa193ce075ba49fc2c015cd1a25264956b47cadd4747cc9397c3d3c85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl"
Jan 22 00:34:07.717642 kubelet[2961]: E0122 00:34:07.716679 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f115fa193ce075ba49fc2c015cd1a25264956b47cadd4747cc9397c3d3c85477\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl"
Jan 22 00:34:07.719709 kubelet[2961]: E0122 00:34:07.716789 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f115fa193ce075ba49fc2c015cd1a25264956b47cadd4747cc9397c3d3c85477\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a"
Jan 22 00:34:07.721698 containerd[1635]: time="2026-01-22T00:34:07.721303496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0c1b42b01ed57fe4b1c982f16ef1f2727dd16a38274ac9c4731119cb83a547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.723126 kubelet[2961]: E0122 00:34:07.722487 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0c1b42b01ed57fe4b1c982f16ef1f2727dd16a38274ac9c4731119cb83a547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.723126 kubelet[2961]: E0122 00:34:07.722640 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0c1b42b01ed57fe4b1c982f16ef1f2727dd16a38274ac9c4731119cb83a547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz"
Jan 22 00:34:07.723126 kubelet[2961]: E0122 00:34:07.722667 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0c1b42b01ed57fe4b1c982f16ef1f2727dd16a38274ac9c4731119cb83a547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz"
Jan 22 00:34:07.726424 kubelet[2961]: E0122 00:34:07.726023 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf0c1b42b01ed57fe4b1c982f16ef1f2727dd16a38274ac9c4731119cb83a547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4"
Jan 22 00:34:07.740017 containerd[1635]: time="2026-01-22T00:34:07.739632067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5647c16c99e68a1e0e049c94f93625d36beedb7b7079d9ea6b9882939146563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.746368 kubelet[2961]: E0122 00:34:07.744742 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5647c16c99e68a1e0e049c94f93625d36beedb7b7079d9ea6b9882939146563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:07.746368 kubelet[2961]: E0122 00:34:07.745376 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5647c16c99e68a1e0e049c94f93625d36beedb7b7079d9ea6b9882939146563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn"
Jan 22 00:34:07.746368 kubelet[2961]: E0122 00:34:07.745412 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5647c16c99e68a1e0e049c94f93625d36beedb7b7079d9ea6b9882939146563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlbdn"
Jan 22 00:34:07.765379 kubelet[2961]: E0122 00:34:07.748220 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zlbdn_kube-system(1fc2a2d5-dab7-482f-b368-90c3db40ee93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5647c16c99e68a1e0e049c94f93625d36beedb7b7079d9ea6b9882939146563\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlbdn" podUID="1fc2a2d5-dab7-482f-b368-90c3db40ee93"
Jan 22 00:34:07.770422 containerd[1635]: time="2026-01-22T00:34:07.750270831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:34:07.771450 containerd[1635]: time="2026-01-22T00:34:07.771299818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880766"
Jan 22 00:34:07.778309 containerd[1635]: time="2026-01-22T00:34:07.778197114Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:34:07.799033 containerd[1635]: time="2026-01-22T00:34:07.796487874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 22 00:34:07.847404 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 22 00:34:07.847735 kernel: audit: type=1130 audit(1769042047.801:651): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.25:22-10.0.0.1:53880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:07.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.25:22-10.0.0.1:53880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:07.848332 containerd[1635]: time="2026-01-22T00:34:07.820248010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 58.705629034s"
Jan 22 00:34:07.848332 containerd[1635]: time="2026-01-22T00:34:07.820298054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Jan 22 00:34:07.802080 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:53880.service - OpenSSH per-connection server daemon (10.0.0.1:53880).
Jan 22 00:34:08.013139 containerd[1635]: time="2026-01-22T00:34:08.006006190Z" level=info msg="CreateContainer within sandbox \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 22 00:34:08.031058 containerd[1635]: time="2026-01-22T00:34:08.029018360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,}"
Jan 22 00:34:08.387644 containerd[1635]: time="2026-01-22T00:34:08.387494563Z" level=info msg="Container 9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da: CDI devices from CRI Config.CDIDevices: []"
Jan 22 00:34:08.503000 audit[5288]: USER_ACCT pid=5288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.515076 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:34:08.523283 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 53880 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:34:08.540181 systemd-logind[1609]: New session 19 of user core.
Jan 22 00:34:08.508000 audit[5288]: CRED_ACQ pid=5288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.571721 containerd[1635]: time="2026-01-22T00:34:08.571404758Z" level=info msg="CreateContainer within sandbox \"060b3c98319fdb0a27bfaa3003e83b54e3af2ff70f599b40c733cf45a3381271\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da\""
Jan 22 00:34:08.582930 containerd[1635]: time="2026-01-22T00:34:08.581306953Z" level=info msg="StartContainer for \"9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da\""
Jan 22 00:34:08.601631 kernel: audit: type=1101 audit(1769042048.503:652): pid=5288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.607181 kernel: audit: type=1103 audit(1769042048.508:653): pid=5288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.607250 kernel: audit: type=1006 audit(1769042048.509:654): pid=5288 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1
Jan 22 00:34:08.509000 audit[5288]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbdd2aab0 a2=3 a3=0 items=0 ppid=1 pid=5288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:08.670003 containerd[1635]: time="2026-01-22T00:34:08.669673588Z" level=info msg="connecting to shim 9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da" address="unix:///run/containerd/s/39356ce56dbcdb71eb22ee925c9517c9e05605054c892197b46c1cd6d35d935c" protocol=ttrpc version=3
Jan 22 00:34:08.509000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:08.706007 kernel: audit: type=1300 audit(1769042048.509:654): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbdd2aab0 a2=3 a3=0 items=0 ppid=1 pid=5288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:08.706170 kernel: audit: type=1327 audit(1769042048.509:654): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:08.711989 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 22 00:34:08.756000 audit[5288]: USER_START pid=5288 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.785000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.869158 kernel: audit: type=1105 audit(1769042048.756:655): pid=5288 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.869325 kernel: audit: type=1103 audit(1769042048.785:656): pid=5322 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:08.995751 containerd[1635]: time="2026-01-22T00:34:08.994466594Z" level=error msg="Failed to destroy network for sandbox \"20def5d5b21947cdb24efd401efa8bdfd24b508f26efed38e69d4d946c9bbe37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.016273 systemd[1]: run-netns-cni\x2d5dc5d7bd\x2d322e\x2dc0c9\x2d7bef\x2d00f106b2393e.mount: Deactivated successfully.
Jan 22 00:34:09.038672 containerd[1635]: time="2026-01-22T00:34:09.036084199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,}"
Jan 22 00:34:09.038672 containerd[1635]: time="2026-01-22T00:34:09.036215324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,}"
Jan 22 00:34:09.061121 systemd[1]: Started cri-containerd-9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da.scope - libcontainer container 9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da.
Jan 22 00:34:09.093351 containerd[1635]: time="2026-01-22T00:34:09.087461422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20def5d5b21947cdb24efd401efa8bdfd24b508f26efed38e69d4d946c9bbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.100963 kubelet[2961]: E0122 00:34:09.097716 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20def5d5b21947cdb24efd401efa8bdfd24b508f26efed38e69d4d946c9bbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.100963 kubelet[2961]: E0122 00:34:09.099649 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20def5d5b21947cdb24efd401efa8bdfd24b508f26efed38e69d4d946c9bbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd"
Jan 22 00:34:09.113406 kubelet[2961]: E0122 00:34:09.108782 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20def5d5b21947cdb24efd401efa8bdfd24b508f26efed38e69d4d946c9bbe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-956jd"
Jan 22 00:34:09.113406 kubelet[2961]: E0122 00:34:09.112693 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20def5d5b21947cdb24efd401efa8bdfd24b508f26efed38e69d4d946c9bbe37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2"
Jan 22 00:34:09.425015 sshd[5322]: Connection closed by 10.0.0.1 port 53880
Jan 22 00:34:09.423248 sshd-session[5288]: pam_unix(sshd:session): session closed for user core
Jan 22 00:34:09.432000 audit[5288]: USER_END pid=5288 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:09.442303 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:53880.service: Deactivated successfully.
Jan 22 00:34:09.453673 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 00:34:09.457482 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit.
Jan 22 00:34:09.462493 systemd-logind[1609]: Removed session 19.
Jan 22 00:34:09.504119 kernel: audit: type=1106 audit(1769042049.432:657): pid=5288 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:09.504243 kernel: audit: type=1104 audit(1769042049.433:658): pid=5288 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:09.433000 audit[5288]: CRED_DISP pid=5288 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:09.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.25:22-10.0.0.1:53880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:09.600000 audit: BPF prog-id=170 op=LOAD
Jan 22 00:34:09.600000 audit[5323]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3494 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:09.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961666237353562393438613463336464363938633963353639616465
Jan 22 00:34:09.600000 audit: BPF prog-id=171 op=LOAD
Jan 22 00:34:09.600000 audit[5323]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3494 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:09.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961666237353562393438613463336464363938633963353639616465
Jan 22 00:34:09.600000 audit: BPF prog-id=171 op=UNLOAD
Jan 22 00:34:09.600000 audit[5323]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:09.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961666237353562393438613463336464363938633963353639616465
Jan 22 00:34:09.600000 audit: BPF prog-id=170 op=UNLOAD
Jan 22 00:34:09.600000 audit[5323]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3494 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:09.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961666237353562393438613463336464363938633963353639616465
Jan 22 00:34:09.600000 audit: BPF prog-id=172 op=LOAD
Jan 22 00:34:09.600000 audit[5323]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3494 pid=5323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:09.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961666237353562393438613463336464363938633963353639616465
Jan 22 00:34:09.884793 containerd[1635]: time="2026-01-22T00:34:09.881755181Z" level=info msg="StartContainer for \"9afb755b948a4c3dd698c9c569aded86305372393fc5a1593afc79aa4b34e5da\" returns successfully"
Jan 22 00:34:09.922701 containerd[1635]: time="2026-01-22T00:34:09.922266663Z" level=error msg="Failed to destroy network for sandbox \"1f7919fcce901ee440c9e092c05db60ec6097bd90d16c8570bc81bc5f9221db7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.935762 systemd[1]: run-netns-cni\x2d7da5171c\x2dcf87\x2d9f52\x2d2f8e\x2d9999acfd2424.mount: Deactivated successfully.
Jan 22 00:34:09.963720 containerd[1635]: time="2026-01-22T00:34:09.963121750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7919fcce901ee440c9e092c05db60ec6097bd90d16c8570bc81bc5f9221db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.966711 kubelet[2961]: E0122 00:34:09.966049 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7919fcce901ee440c9e092c05db60ec6097bd90d16c8570bc81bc5f9221db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.966711 kubelet[2961]: E0122 00:34:09.966135 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7919fcce901ee440c9e092c05db60ec6097bd90d16c8570bc81bc5f9221db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f"
Jan 22 00:34:09.966711 kubelet[2961]: E0122 00:34:09.966164 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7919fcce901ee440c9e092c05db60ec6097bd90d16c8570bc81bc5f9221db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfg9f"
Jan 22 00:34:09.967131 kubelet[2961]: E0122 00:34:09.966234 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f7919fcce901ee440c9e092c05db60ec6097bd90d16c8570bc81bc5f9221db7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"
Jan 22 00:34:09.981380 containerd[1635]: time="2026-01-22T00:34:09.981318088Z" level=error msg="Failed to destroy network for sandbox \"62e6164e700c846175ebcb942566d650ace23ee5228018ef0951fd0c7829f766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 22 00:34:09.986397 systemd[1]: run-netns-cni\x2d823d4c62\x2dc32e\x2d1ec5\x2d91a6\x2da60709d5558b.mount: Deactivated successfully.
Jan 22 00:34:09.997115 containerd[1635]: time="2026-01-22T00:34:09.996750425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6164e700c846175ebcb942566d650ace23ee5228018ef0951fd0c7829f766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:34:09.997405 kubelet[2961]: E0122 00:34:09.997168 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6164e700c846175ebcb942566d650ace23ee5228018ef0951fd0c7829f766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:34:09.997405 kubelet[2961]: E0122 00:34:09.997223 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6164e700c846175ebcb942566d650ace23ee5228018ef0951fd0c7829f766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:34:09.997405 kubelet[2961]: E0122 00:34:09.997247 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6164e700c846175ebcb942566d650ace23ee5228018ef0951fd0c7829f766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" Jan 22 00:34:09.997663 kubelet[2961]: E0122 00:34:09.997314 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e6164e700c846175ebcb942566d650ace23ee5228018ef0951fd0c7829f766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:34:09.999406 kubelet[2961]: E0122 00:34:09.999223 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:10.000205 kubelet[2961]: E0122 00:34:10.000032 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:10.017757 kubelet[2961]: E0122 00:34:10.016760 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:10.018678 containerd[1635]: time="2026-01-22T00:34:10.018186043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:10.395457 containerd[1635]: time="2026-01-22T00:34:10.395295153Z" level=error msg="Failed to destroy network for sandbox 
\"e6f72ba8c23d322074d99a0cc6d48b4964cbb1955034103579155277191982b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:34:10.409689 systemd[1]: run-netns-cni\x2d9aa267fb\x2de89b\x2da99f\x2d988c\x2dde58e30e4964.mount: Deactivated successfully. Jan 22 00:34:10.414162 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 22 00:34:10.414243 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 22 00:34:10.434954 containerd[1635]: time="2026-01-22T00:34:10.433638619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f72ba8c23d322074d99a0cc6d48b4964cbb1955034103579155277191982b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:34:10.436303 kubelet[2961]: E0122 00:34:10.436266 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f72ba8c23d322074d99a0cc6d48b4964cbb1955034103579155277191982b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:34:10.438489 kubelet[2961]: E0122 00:34:10.436944 2961 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f72ba8c23d322074d99a0cc6d48b4964cbb1955034103579155277191982b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:34:10.438489 kubelet[2961]: E0122 00:34:10.436972 2961 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f72ba8c23d322074d99a0cc6d48b4964cbb1955034103579155277191982b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zqk9r" Jan 22 00:34:10.440657 kubelet[2961]: E0122 00:34:10.438697 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zqk9r_kube-system(2cb94888-9f16-48f2-8fc7-64c6889ef0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6f72ba8c23d322074d99a0cc6d48b4964cbb1955034103579155277191982b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zqk9r" podUID="2cb94888-9f16-48f2-8fc7-64c6889ef0fc" Jan 22 00:34:10.703039 kubelet[2961]: E0122 00:34:10.697772 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:10.893674 kubelet[2961]: I0122 00:34:10.893260 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h9m68" podStartSLOduration=5.760241038 podStartE2EDuration="1m38.893234418s" podCreationTimestamp="2026-01-22 00:32:32 +0000 UTC" firstStartedPulling="2026-01-22 00:32:34.692068896 +0000 UTC 
m=+74.720974460" lastFinishedPulling="2026-01-22 00:34:07.825062276 +0000 UTC m=+167.853967840" observedRunningTime="2026-01-22 00:34:10.890759158 +0000 UTC m=+170.919664752" watchObservedRunningTime="2026-01-22 00:34:10.893234418 +0000 UTC m=+170.922140002" Jan 22 00:34:11.022690 containerd[1635]: time="2026-01-22T00:34:11.022134681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,}" Jan 22 00:34:11.713042 kubelet[2961]: E0122 00:34:11.712775 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:13.011370 containerd[1635]: time="2026-01-22T00:34:13.010990396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:34:13.071173 systemd-networkd[1529]: calife450f84e1b: Link UP Jan 22 00:34:13.076285 systemd-networkd[1529]: calife450f84e1b: Gained carrier Jan 22 00:34:13.207299 containerd[1635]: 2026-01-22 00:34:11.275 [INFO][5493] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:34:13.207299 containerd[1635]: 2026-01-22 00:34:11.542 [INFO][5493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0 whisker-5df8f6b8cf- calico-system bbc09f02-5803-4a1d-8b41-1e543cceb488 1369 0 2026-01-22 00:32:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5df8f6b8cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5df8f6b8cf-s2hhm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calife450f84e1b [] [] }} 
ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-" Jan 22 00:34:13.207299 containerd[1635]: 2026-01-22 00:34:11.542 [INFO][5493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.207299 containerd[1635]: 2026-01-22 00:34:12.384 [INFO][5534] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.388 [INFO][5534] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c7150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5df8f6b8cf-s2hhm", "timestamp":"2026-01-22 00:34:12.384460246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.389 [INFO][5534] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.389 [INFO][5534] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.390 [INFO][5534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.441 [INFO][5534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.528 [INFO][5534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.581 [INFO][5534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.593 [INFO][5534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.616 [INFO][5534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:13.211046 containerd[1635]: 2026-01-22 00:34:12.616 [INFO][5534] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.211764 containerd[1635]: 2026-01-22 00:34:12.661 [INFO][5534] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb Jan 22 00:34:13.211764 containerd[1635]: 2026-01-22 00:34:12.693 [INFO][5534] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.716 [ERROR][5534] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" 
Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1379", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc0001889c0), Allocations:[]*int{(*int)(0xc00018c528), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003c7150), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5df8f6b8cf-s2hhm", "timestamp":"2026-01-22 
00:34:12.384460246 +0000 UTC"}}}, SequenceNumber:0x188ce65fb9c9844b, SequenceNumberForAllocation:map[string]uint64{"0":0x188ce65fb9c9844a}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.717 [INFO][5534] ipam/ipam.go 1250: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.826 [INFO][5534] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.839 [INFO][5534] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.889 [INFO][5534] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.930 [INFO][5534] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.930 [INFO][5534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" host="localhost" Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 
00:34:12.942 [INFO][5534] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:13.212017 containerd[1635]: 2026-01-22 00:34:12.947 [INFO][5534] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.214201 containerd[1635]: 2026-01-22 00:34:12.962 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0", GenerateName:"whisker-5df8f6b8cf-", Namespace:"calico-system", SelfLink:"", UID:"bbc09f02-5803-4a1d-8b41-1e543cceb488", ResourceVersion:"1369", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df8f6b8cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5df8f6b8cf-s2hhm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife450f84e1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:13.214201 containerd[1635]: 2026-01-22 00:34:12.963 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.214201 containerd[1635]: 2026-01-22 00:34:12.963 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife450f84e1b ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.214201 containerd[1635]: 2026-01-22 00:34:13.072 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.214201 containerd[1635]: 2026-01-22 00:34:13.084 [INFO][5493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0", GenerateName:"whisker-5df8f6b8cf-", Namespace:"calico-system", SelfLink:"", UID:"bbc09f02-5803-4a1d-8b41-1e543cceb488", ResourceVersion:"1369", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df8f6b8cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb", Pod:"whisker-5df8f6b8cf-s2hhm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife450f84e1b", MAC:"9a:ae:16:46:bd:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:13.214201 containerd[1635]: 2026-01-22 00:34:13.181 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Namespace="calico-system" Pod="whisker-5df8f6b8cf-s2hhm" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:13.559940 containerd[1635]: time="2026-01-22T00:34:13.552151671Z" level=info msg="connecting to shim ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" address="unix:///run/containerd/s/8bfafc2cf0f8baeaeca5c4dd19064ead886577b3c163e57d6ef3af7cbf9b5c5c" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:13.721254 systemd[1]: Started cri-containerd-ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb.scope - libcontainer container 
ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb. Jan 22 00:34:13.859000 audit: BPF prog-id=173 op=LOAD Jan 22 00:34:13.880289 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 22 00:34:13.880437 kernel: audit: type=1334 audit(1769042053.859:665): prog-id=173 op=LOAD Jan 22 00:34:13.880000 audit: BPF prog-id=174 op=LOAD Jan 22 00:34:13.880000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.927509 kernel: audit: type=1334 audit(1769042053.880:666): prog-id=174 op=LOAD Jan 22 00:34:13.927682 kernel: audit: type=1300 audit(1769042053.880:666): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.919054 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:13.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.977003 kernel: audit: type=1327 audit(1769042053.880:666): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.880000 audit: BPF prog-id=174 op=UNLOAD Jan 22 00:34:13.880000 
audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:14.041373 systemd-networkd[1529]: cali56cdaae838f: Link UP Jan 22 00:34:14.066359 systemd-networkd[1529]: cali56cdaae838f: Gained carrier Jan 22 00:34:14.077968 kernel: audit: type=1334 audit(1769042053.880:667): prog-id=174 op=UNLOAD Jan 22 00:34:14.078625 kernel: audit: type=1300 audit(1769042053.880:667): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:14.078654 kernel: audit: type=1327 audit(1769042053.880:667): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:14.169970 kernel: audit: type=1334 audit(1769042053.881:668): prog-id=175 op=LOAD Jan 22 00:34:13.881000 audit: BPF prog-id=175 op=LOAD Jan 22 00:34:13.881000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.311 [INFO][5584] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.385 [INFO][5584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0 calico-apiserver-567c775cb4- calico-apiserver a8109f84-107e-4926-bb88-cd99083f8125 1059 0 2026-01-22 00:32:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567c775cb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567c775cb4-2tqd7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56cdaae838f [] [] }} ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.390 [INFO][5584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.669 [INFO][5604] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" HandleID="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Workload="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.670 [INFO][5604] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" HandleID="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Workload="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004364d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567c775cb4-2tqd7", "timestamp":"2026-01-22 00:34:13.669521188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.670 [INFO][5604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.670 [INFO][5604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.670 [INFO][5604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.726 [INFO][5604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.778 [INFO][5604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.829 [INFO][5604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.840 [INFO][5604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.876 [INFO][5604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.878 [INFO][5604] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.892 [INFO][5604] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786 Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.907 [INFO][5604] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.972 [INFO][5604] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.975 [INFO][5604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" host="localhost" Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.978 [INFO][5604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:14.227785 containerd[1635]: 2026-01-22 00:34:13.980 [INFO][5604] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" HandleID="k8s-pod-network.7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Workload="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.234068 containerd[1635]: 2026-01-22 00:34:14.013 [INFO][5584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0", GenerateName:"calico-apiserver-567c775cb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8109f84-107e-4926-bb88-cd99083f8125", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567c775cb4", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567c775cb4-2tqd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56cdaae838f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:14.234068 containerd[1635]: 2026-01-22 00:34:14.016 [INFO][5584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.234068 containerd[1635]: 2026-01-22 00:34:14.019 [INFO][5584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56cdaae838f ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.234068 containerd[1635]: 2026-01-22 00:34:14.078 [INFO][5584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.234068 containerd[1635]: 2026-01-22 00:34:14.084 [INFO][5584] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0", GenerateName:"calico-apiserver-567c775cb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8109f84-107e-4926-bb88-cd99083f8125", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567c775cb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786", Pod:"calico-apiserver-567c775cb4-2tqd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56cdaae838f", MAC:"5e:f9:b6:ef:14:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:14.234068 containerd[1635]: 2026-01-22 00:34:14.197 [INFO][5584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" Namespace="calico-apiserver" Pod="calico-apiserver-567c775cb4-2tqd7" WorkloadEndpoint="localhost-k8s-calico--apiserver--567c775cb4--2tqd7-eth0" Jan 22 00:34:14.236944 kernel: audit: type=1300 audit(1769042053.881:668): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:14.324478 kernel: audit: type=1327 audit(1769042053.881:668): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.881000 audit: BPF prog-id=176 op=LOAD Jan 22 00:34:13.881000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.881000 audit: BPF prog-id=176 op=UNLOAD Jan 22 00:34:13.881000 
audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.881000 audit: BPF prog-id=175 op=UNLOAD Jan 22 00:34:13.881000 audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:13.881000 audit: BPF prog-id=177 op=LOAD Jan 22 00:34:13.881000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5617 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:13.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666633564346262393732643534343261623665393965633535383934 Jan 22 00:34:14.414284 containerd[1635]: 
time="2026-01-22T00:34:14.414121019Z" level=info msg="connecting to shim 7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786" address="unix:///run/containerd/s/ebd58e52f4361ca3abb8a1946842a2ae11e709758b33baa94d87e6e701b28d75" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:14.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.25:22-10.0.0.1:33302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:14.453482 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:33302.service - OpenSSH per-connection server daemon (10.0.0.1:33302). Jan 22 00:34:14.780205 containerd[1635]: time="2026-01-22T00:34:14.779770075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df8f6b8cf-s2hhm,Uid:bbc09f02-5803-4a1d-8b41-1e543cceb488,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\"" Jan 22 00:34:14.822340 systemd-networkd[1529]: calife450f84e1b: Gained IPv6LL Jan 22 00:34:14.884000 audit[5735]: USER_ACCT pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:14.888173 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 33302 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:14.898040 containerd[1635]: time="2026-01-22T00:34:14.897426165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:34:14.899000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 
00:34:14.901000 audit[5735]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcff877300 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:14.901000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:14.906500 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:14.978131 systemd-logind[1609]: New session 20 of user core. Jan 22 00:34:14.985001 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 22 00:34:15.023000 audit[5735]: USER_START pid=5735 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:15.033000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:15.073954 containerd[1635]: time="2026-01-22T00:34:15.072651412Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:15.099120 containerd[1635]: time="2026-01-22T00:34:15.098628343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:34:15.106348 containerd[1635]: time="2026-01-22T00:34:15.103674301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes 
read=0" Jan 22 00:34:15.107687 kubelet[2961]: E0122 00:34:15.107290 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:34:15.121201 systemd[1]: Started cri-containerd-7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786.scope - libcontainer container 7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786. Jan 22 00:34:15.130114 kubelet[2961]: E0122 00:34:15.128126 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:34:15.130114 kubelet[2961]: E0122 00:34:15.128315 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:15.163728 containerd[1635]: time="2026-01-22T00:34:15.160355337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:34:15.326535 containerd[1635]: time="2026-01-22T00:34:15.325699148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:15.336931 containerd[1635]: time="2026-01-22T00:34:15.336051089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:34:15.337097 kubelet[2961]: E0122 00:34:15.336376 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:34:15.337097 kubelet[2961]: E0122 00:34:15.336443 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:34:15.337097 kubelet[2961]: E0122 00:34:15.336533 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5df8f6b8cf-s2hhm_calico-system(bbc09f02-5803-4a1d-8b41-1e543cceb488): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:15.337097 kubelet[2961]: E0122 00:34:15.336725 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5df8f6b8cf-s2hhm" podUID="bbc09f02-5803-4a1d-8b41-1e543cceb488" Jan 22 00:34:15.338536 containerd[1635]: time="2026-01-22T00:34:15.338232581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:15.492000 audit: BPF prog-id=178 op=LOAD Jan 22 00:34:15.493000 audit: BPF prog-id=179 op=LOAD Jan 22 00:34:15.493000 audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e238 a2=98 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.493000 audit: BPF prog-id=179 op=UNLOAD Jan 22 00:34:15.493000 audit[5775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.494000 audit: BPF prog-id=180 op=LOAD Jan 22 00:34:15.494000 audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e488 a2=98 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.494000 audit: BPF prog-id=181 op=LOAD Jan 22 00:34:15.494000 audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017e218 a2=98 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.494000 audit: BPF prog-id=181 op=UNLOAD Jan 22 00:34:15.494000 audit[5775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.494000 audit: BPF prog-id=180 op=UNLOAD Jan 22 00:34:15.494000 audit[5775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.494000 audit: BPF prog-id=182 op=LOAD Jan 22 00:34:15.494000 audit[5775]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017e6e8 a2=98 a3=0 items=0 ppid=5715 pid=5775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:15.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737363063666662636261363130313763323237633964373039373336 Jan 22 00:34:15.522930 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:15.534745 sshd[5785]: Connection closed by 10.0.0.1 port 33302 Jan 22 00:34:15.537481 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:15.553000 audit[5735]: USER_END pid=5735 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:15.554000 audit[5735]: CRED_DISP pid=5735 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:15.563395 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit. Jan 22 00:34:15.565372 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:33302.service: Deactivated successfully. Jan 22 00:34:15.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.25:22-10.0.0.1:33302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:15.579053 systemd[1]: session-20.scope: Deactivated successfully. Jan 22 00:34:15.591235 systemd-logind[1609]: Removed session 20. Jan 22 00:34:15.784325 containerd[1635]: time="2026-01-22T00:34:15.782762736Z" level=info msg="StopPodSandbox for \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\"" Jan 22 00:34:15.939723 containerd[1635]: time="2026-01-22T00:34:15.938145825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567c775cb4-2tqd7,Uid:a8109f84-107e-4926-bb88-cd99083f8125,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7760cffbcba61017c227c9d709736967bc0c7caa83d24e6b098e55bc6a6a2786\"" Jan 22 00:34:15.971063 containerd[1635]: time="2026-01-22T00:34:15.964389674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:34:16.008981 systemd[1]: cri-containerd-ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb.scope: Deactivated successfully. 
Jan 22 00:34:16.015000 audit: BPF prog-id=173 op=UNLOAD Jan 22 00:34:16.015000 audit: BPF prog-id=177 op=UNLOAD Jan 22 00:34:16.020198 systemd-networkd[1529]: cali56cdaae838f: Gained IPv6LL Jan 22 00:34:16.042947 containerd[1635]: time="2026-01-22T00:34:16.037717379Z" level=info msg="received sandbox exit event container_id:\"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" id:\"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" exit_status:137 exited_at:{seconds:1769042056 nanos:31202231}" monitor_name=podsandbox Jan 22 00:34:16.086125 containerd[1635]: time="2026-01-22T00:34:16.078374529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:16.088424 containerd[1635]: time="2026-01-22T00:34:16.087510936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:34:16.088424 containerd[1635]: time="2026-01-22T00:34:16.087777934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:16.101121 kubelet[2961]: E0122 00:34:16.089025 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:16.101121 kubelet[2961]: E0122 00:34:16.089350 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:16.101121 
kubelet[2961]: E0122 00:34:16.089455 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:16.101121 kubelet[2961]: E0122 00:34:16.089500 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:34:16.310000 audit[5846]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5846 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:16.310000 audit[5846]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf139a290 a2=0 a3=7ffdf139a27c items=0 ppid=3099 pid=5846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:16.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:16.334201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb-rootfs.mount: Deactivated successfully. 
Jan 22 00:34:16.331000 audit[5846]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=5846 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:16.331000 audit[5846]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf139a290 a2=0 a3=0 items=0 ppid=3099 pid=5846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:16.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:16.435943 containerd[1635]: time="2026-01-22T00:34:16.435435540Z" level=info msg="shim disconnected" id=ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb namespace=k8s.io Jan 22 00:34:16.435943 containerd[1635]: time="2026-01-22T00:34:16.435625023Z" level=info msg="cleaning up after shim disconnected" id=ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb namespace=k8s.io Jan 22 00:34:16.459088 containerd[1635]: time="2026-01-22T00:34:16.435644168Z" level=info msg="cleaning up dead shim" id=ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb namespace=k8s.io Jan 22 00:34:16.674226 containerd[1635]: time="2026-01-22T00:34:16.673222992Z" level=info msg="received sandbox container exit event sandbox_id:\"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" exit_status:137 exited_at:{seconds:1769042056 nanos:31202231}" monitor_name=criService Jan 22 00:34:16.691178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb-shm.mount: Deactivated successfully. 
Jan 22 00:34:16.791073 kubelet[2961]: I0122 00:34:16.790767 2961 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:16.803035 kubelet[2961]: E0122 00:34:16.802418 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:34:16.989515 kubelet[2961]: E0122 00:34:16.988666 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:17.112000 audit: BPF prog-id=183 op=LOAD Jan 22 00:34:17.112000 audit[5909]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe24a6a540 a2=98 a3=1fffffffffffffff items=0 ppid=5682 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.112000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:34:17.113000 audit: BPF prog-id=183 op=UNLOAD Jan 22 00:34:17.113000 audit[5909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe24a6a510 a3=0 items=0 ppid=5682 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.113000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:34:17.113000 audit: BPF prog-id=184 op=LOAD Jan 22 00:34:17.113000 audit[5909]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe24a6a420 a2=94 a3=3 items=0 ppid=5682 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.113000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:34:17.113000 audit: BPF prog-id=184 op=UNLOAD Jan 22 00:34:17.113000 audit[5909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe24a6a420 a2=94 a3=3 items=0 ppid=5682 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.113000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:34:17.113000 audit: BPF prog-id=185 op=LOAD Jan 22 00:34:17.113000 audit[5909]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe24a6a460 a2=94 a3=7ffe24a6a640 items=0 
ppid=5682 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.113000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:34:17.113000 audit: BPF prog-id=185 op=UNLOAD Jan 22 00:34:17.113000 audit[5909]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe24a6a460 a2=94 a3=7ffe24a6a640 items=0 ppid=5682 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.113000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:34:17.122196 systemd-networkd[1529]: calife450f84e1b: Link DOWN Jan 22 00:34:17.122216 systemd-networkd[1529]: calife450f84e1b: Lost carrier Jan 22 00:34:17.179000 audit: BPF prog-id=186 op=LOAD Jan 22 00:34:17.179000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe55a10130 a2=98 a3=3 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.179000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:17.180000 audit: BPF prog-id=186 op=UNLOAD Jan 22 00:34:17.180000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 
a2=7ffe55a10100 a3=0 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.180000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:17.180000 audit: BPF prog-id=187 op=LOAD Jan 22 00:34:17.180000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe55a0ff20 a2=94 a3=54428f items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.180000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:17.180000 audit: BPF prog-id=187 op=UNLOAD Jan 22 00:34:17.180000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe55a0ff20 a2=94 a3=54428f items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.180000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:17.180000 audit: BPF prog-id=188 op=LOAD Jan 22 00:34:17.180000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe55a0ff50 a2=94 a3=2 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.180000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:17.180000 audit: BPF prog-id=188 op=UNLOAD Jan 22 00:34:17.180000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe55a0ff50 a2=0 a3=2 items=0 ppid=5682 pid=5910 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.180000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:17.481000 audit[5919]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:17.481000 audit[5919]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdac202cb0 a2=0 a3=7ffdac202c9c items=0 ppid=3099 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:17.493000 audit[5919]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:17.493000 audit[5919]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdac202cb0 a2=0 a3=0 items=0 ppid=3099 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:17.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.119 [INFO][5898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.119 [INFO][5898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" iface="eth0" netns="/var/run/netns/cni-1dc9ecad-fc89-6e38-2224-3f54ab429513" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.119 [INFO][5898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" iface="eth0" netns="/var/run/netns/cni-1dc9ecad-fc89-6e38-2224-3f54ab429513" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.208 [INFO][5898] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" after=89.252206ms iface="eth0" netns="/var/run/netns/cni-1dc9ecad-fc89-6e38-2224-3f54ab429513" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.209 [INFO][5898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.209 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.477 [INFO][5912] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.479 [INFO][5912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.479 [INFO][5912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.726 [INFO][5912] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.726 [INFO][5912] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.741 [INFO][5912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:17.777430 containerd[1635]: 2026-01-22 00:34:17.766 [INFO][5898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:17.799702 containerd[1635]: time="2026-01-22T00:34:17.788235965Z" level=info msg="TearDown network for sandbox \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" successfully" Jan 22 00:34:17.799702 containerd[1635]: time="2026-01-22T00:34:17.788326425Z" level=info msg="StopPodSandbox for \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" returns successfully" Jan 22 00:34:17.794216 systemd[1]: run-netns-cni\x2d1dc9ecad\x2dfc89\x2d6e38\x2d2224\x2d3f54ab429513.mount: Deactivated successfully. 
Jan 22 00:34:17.839747 kubelet[2961]: E0122 00:34:17.839432 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:34:18.063430 kubelet[2961]: I0122 00:34:18.063138 2961 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-ca-bundle\") pod \"bbc09f02-5803-4a1d-8b41-1e543cceb488\" (UID: \"bbc09f02-5803-4a1d-8b41-1e543cceb488\") " Jan 22 00:34:18.063430 kubelet[2961]: I0122 00:34:18.063354 2961 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn7xg\" (UniqueName: \"kubernetes.io/projected/bbc09f02-5803-4a1d-8b41-1e543cceb488-kube-api-access-sn7xg\") pod \"bbc09f02-5803-4a1d-8b41-1e543cceb488\" (UID: \"bbc09f02-5803-4a1d-8b41-1e543cceb488\") " Jan 22 00:34:18.063430 kubelet[2961]: I0122 00:34:18.063390 2961 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-backend-key-pair\") pod \"bbc09f02-5803-4a1d-8b41-1e543cceb488\" (UID: \"bbc09f02-5803-4a1d-8b41-1e543cceb488\") " Jan 22 00:34:18.065425 kubelet[2961]: I0122 00:34:18.065300 2961 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bbc09f02-5803-4a1d-8b41-1e543cceb488" (UID: 
"bbc09f02-5803-4a1d-8b41-1e543cceb488"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 00:34:18.090086 systemd[1]: var-lib-kubelet-pods-bbc09f02\x2d5803\x2d4a1d\x2d8b41\x2d1e543cceb488-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 22 00:34:18.090263 systemd[1]: var-lib-kubelet-pods-bbc09f02\x2d5803\x2d4a1d\x2d8b41\x2d1e543cceb488-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn7xg.mount: Deactivated successfully. Jan 22 00:34:18.095468 kubelet[2961]: I0122 00:34:18.095414 2961 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bbc09f02-5803-4a1d-8b41-1e543cceb488" (UID: "bbc09f02-5803-4a1d-8b41-1e543cceb488"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 00:34:18.111951 kubelet[2961]: I0122 00:34:18.105323 2961 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc09f02-5803-4a1d-8b41-1e543cceb488-kube-api-access-sn7xg" (OuterVolumeSpecName: "kube-api-access-sn7xg") pod "bbc09f02-5803-4a1d-8b41-1e543cceb488" (UID: "bbc09f02-5803-4a1d-8b41-1e543cceb488"). InnerVolumeSpecName "kube-api-access-sn7xg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 00:34:18.124000 audit: BPF prog-id=189 op=LOAD Jan 22 00:34:18.124000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe55a0fe10 a2=94 a3=1 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.124000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.125000 audit: BPF prog-id=189 op=UNLOAD Jan 22 00:34:18.125000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe55a0fe10 a2=94 a3=1 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.125000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.167000 audit: BPF prog-id=190 op=LOAD Jan 22 00:34:18.167000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe55a0fe00 a2=94 a3=4 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.167000 audit: BPF prog-id=190 op=UNLOAD Jan 22 00:34:18.167000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe55a0fe00 a2=0 a3=4 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 
00:34:18.167000 audit: BPF prog-id=191 op=LOAD Jan 22 00:34:18.167000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe55a0fc60 a2=94 a3=5 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.167000 audit: BPF prog-id=191 op=UNLOAD Jan 22 00:34:18.167000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe55a0fc60 a2=0 a3=5 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.170000 audit: BPF prog-id=192 op=LOAD Jan 22 00:34:18.170000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe55a0fe80 a2=94 a3=6 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.170000 audit: BPF prog-id=192 op=UNLOAD Jan 22 00:34:18.170000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe55a0fe80 a2=0 a3=6 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.170000 audit: BPF prog-id=193 op=LOAD Jan 22 00:34:18.170000 
audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe55a0f630 a2=94 a3=88 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.172000 audit: BPF prog-id=194 op=LOAD Jan 22 00:34:18.172000 audit[5910]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe55a0f4b0 a2=94 a3=2 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.172000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.172000 audit: BPF prog-id=194 op=UNLOAD Jan 22 00:34:18.172000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe55a0f4e0 a2=0 a3=7ffe55a0f5e0 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.172000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.173000 audit: BPF prog-id=193 op=UNLOAD Jan 22 00:34:18.173000 audit[5910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1db59d10 a2=0 a3=c16aa464a42dff17 items=0 ppid=5682 pid=5910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.173000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:34:18.174296 kubelet[2961]: I0122 00:34:18.164166 2961 reconciler_common.go:299] "Volume detached for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 22 00:34:18.174296 kubelet[2961]: I0122 00:34:18.164222 2961 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sn7xg\" (UniqueName: \"kubernetes.io/projected/bbc09f02-5803-4a1d-8b41-1e543cceb488-kube-api-access-sn7xg\") on node \"localhost\" DevicePath \"\"" Jan 22 00:34:18.174296 kubelet[2961]: I0122 00:34:18.164239 2961 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbc09f02-5803-4a1d-8b41-1e543cceb488-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 22 00:34:18.264000 audit: BPF prog-id=195 op=LOAD Jan 22 00:34:18.264000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedce556d0 a2=98 a3=1999999999999999 items=0 ppid=5682 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.264000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:34:18.265000 audit: BPF prog-id=195 op=UNLOAD Jan 22 00:34:18.265000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffedce556a0 a3=0 items=0 ppid=5682 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.265000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:34:18.266000 audit: BPF prog-id=196 op=LOAD Jan 22 00:34:18.266000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedce555b0 a2=94 a3=ffff items=0 ppid=5682 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.266000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:34:18.267000 audit: BPF prog-id=196 op=UNLOAD Jan 22 00:34:18.267000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffedce555b0 a2=94 a3=ffff items=0 ppid=5682 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.267000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:34:18.270000 audit: BPF prog-id=197 op=LOAD Jan 22 00:34:18.270000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedce555f0 a2=94 a3=7ffedce557d0 items=0 ppid=5682 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.270000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:34:18.272000 audit: BPF prog-id=197 op=UNLOAD Jan 22 00:34:18.272000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffedce555f0 a2=94 a3=7ffedce557d0 items=0 ppid=5682 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:18.272000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:34:18.900309 systemd-networkd[1529]: vxlan.calico: Link UP Jan 22 00:34:18.900325 systemd-networkd[1529]: vxlan.calico: Gained carrier Jan 22 00:34:18.915212 systemd[1]: Removed slice kubepods-besteffort-podbbc09f02_5803_4a1d_8b41_1e543cceb488.slice - libcontainer container kubepods-besteffort-podbbc09f02_5803_4a1d_8b41_1e543cceb488.slice. 
Jan 22 00:34:19.063000 audit: BPF prog-id=198 op=LOAD Jan 22 00:34:19.079172 kernel: kauditd_printk_skb: 149 callbacks suppressed Jan 22 00:34:19.079306 kernel: audit: type=1334 audit(1769042059.063:726): prog-id=198 op=LOAD Jan 22 00:34:19.079349 kernel: audit: type=1300 audit(1769042059.063:726): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0bb26440 a2=98 a3=0 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.079379 kernel: audit: type=1327 audit(1769042059.063:726): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.079402 kernel: audit: type=1334 audit(1769042059.073:727): prog-id=198 op=UNLOAD Jan 22 00:34:19.079425 kernel: audit: type=1300 audit(1769042059.073:727): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0bb26410 a3=0 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.079751 kernel: audit: type=1327 audit(1769042059.073:727): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.079788 kernel: audit: type=1334 audit(1769042059.076:728): prog-id=199 op=LOAD Jan 22 00:34:19.080079 kernel: audit: type=1300 audit(1769042059.076:728): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0bb26250 a2=94 a3=54428f items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.080114 kernel: audit: type=1327 audit(1769042059.076:728): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.080135 kernel: audit: type=1334 audit(1769042059.077:729): prog-id=199 op=UNLOAD Jan 22 00:34:19.063000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0bb26440 a2=98 a3=0 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.063000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.073000 audit: BPF prog-id=198 op=UNLOAD Jan 22 00:34:19.073000 audit[5954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0bb26410 a3=0 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.073000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.076000 audit: BPF prog-id=199 op=LOAD Jan 22 00:34:19.076000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0bb26250 a2=94 a3=54428f items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.076000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=199 op=UNLOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0bb26250 a2=94 a3=54428f items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=200 op=LOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0bb26280 a2=94 a3=2 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=200 op=UNLOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0bb26280 a2=0 a3=2 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=201 op=LOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0bb26030 a2=94 a3=4 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=201 op=UNLOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd0bb26030 a2=94 a3=4 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=202 op=LOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0bb26130 a2=94 a3=7ffd0bb262b0 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.077000 audit: BPF prog-id=202 op=UNLOAD Jan 22 00:34:19.077000 audit[5954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd0bb26130 a2=0 a3=7ffd0bb262b0 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.077000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.102000 audit: BPF prog-id=203 op=LOAD Jan 22 00:34:19.102000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0bb25860 a2=94 a3=2 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.102000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.103000 audit: BPF prog-id=203 op=UNLOAD Jan 22 00:34:19.103000 audit[5954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd0bb25860 a2=0 a3=2 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.103000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.103000 audit: BPF prog-id=204 op=LOAD Jan 22 00:34:19.103000 audit[5954]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd0bb25960 a2=94 a3=30 items=0 ppid=5682 pid=5954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:34:19.170000 audit[5961]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:19.170000 audit[5961]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffecffbd760 a2=0 a3=7ffecffbd74c items=0 ppid=3099 pid=5961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:19.324000 audit[5961]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:19.324000 audit[5961]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffecffbd760 a2=0 a3=0 items=0 ppid=3099 pid=5961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:19.424000 audit: BPF prog-id=205 op=LOAD Jan 22 00:34:19.424000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce9ced010 a2=98 a3=0 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.424000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:19.428000 audit: BPF prog-id=205 op=UNLOAD Jan 22 00:34:19.428000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce9cecfe0 a3=0 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:19.428000 audit: BPF prog-id=206 op=LOAD Jan 22 00:34:19.428000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce9cece00 a2=94 a3=54428f items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.428000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:19.428000 audit: BPF prog-id=206 op=UNLOAD Jan 22 00:34:19.428000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce9cece00 a2=94 a3=54428f items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:19.428000 audit: BPF prog-id=207 op=LOAD Jan 22 00:34:19.428000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce9cece30 a2=94 a3=2 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.428000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:19.431000 audit: BPF prog-id=207 op=UNLOAD Jan 22 00:34:19.431000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce9cece30 a2=0 a3=2 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:19.431000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:19.993708 systemd-networkd[1529]: vxlan.calico: Gained IPv6LL Jan 22 00:34:20.022397 kubelet[2961]: I0122 00:34:20.022084 2961 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc09f02-5803-4a1d-8b41-1e543cceb488" path="/var/lib/kubelet/pods/bbc09f02-5803-4a1d-8b41-1e543cceb488/volumes" Jan 22 00:34:20.073000 audit: BPF prog-id=208 op=LOAD Jan 22 00:34:20.073000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce9ceccf0 a2=94 a3=1 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.073000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.074000 audit: BPF prog-id=208 op=UNLOAD Jan 22 00:34:20.074000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce9ceccf0 a2=94 a3=1 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.074000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.109000 audit: BPF prog-id=209 op=LOAD Jan 22 00:34:20.109000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce9cecce0 a2=94 a3=4 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.109000 audit: BPF prog-id=209 op=UNLOAD Jan 22 00:34:20.109000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffce9cecce0 a2=0 a3=4 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.109000 audit: BPF prog-id=210 op=LOAD Jan 22 00:34:20.109000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce9cecb40 a2=94 a3=5 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.109000 audit: BPF prog-id=210 op=UNLOAD Jan 22 00:34:20.109000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce9cecb40 a2=0 a3=5 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.109000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.112000 audit: BPF prog-id=211 op=LOAD Jan 22 00:34:20.112000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce9cecd60 a2=94 a3=6 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.112000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.114000 audit: BPF prog-id=211 op=UNLOAD Jan 22 00:34:20.114000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffce9cecd60 a2=0 a3=6 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.114000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.116000 audit: BPF prog-id=212 op=LOAD Jan 22 00:34:20.116000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce9cec510 a2=94 a3=88 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.116000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.116000 audit: BPF prog-id=213 op=LOAD Jan 22 00:34:20.116000 audit[5963]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffce9cec390 a2=94 a3=2 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.116000 audit: BPF prog-id=213 op=UNLOAD Jan 22 00:34:20.116000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffce9cec3c0 a2=0 a3=7ffce9cec4c0 items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.116000 audit: BPF prog-id=212 op=UNLOAD Jan 22 00:34:20.116000 audit[5963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1fc27d10 a2=0 a3=557226bc51db648f items=0 ppid=5682 pid=5963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.116000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:34:20.195000 audit: BPF prog-id=204 op=UNLOAD Jan 22 00:34:20.195000 audit[5682]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000aa6300 a2=0 a3=0 items=0 ppid=5656 pid=5682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.195000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 22 00:34:20.625393 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:33310.service - OpenSSH per-connection server daemon (10.0.0.1:33310). Jan 22 00:34:20.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.25:22-10.0.0.1:33310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:20.978000 audit[5975]: USER_ACCT pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:20.987064 sshd[5975]: Accepted publickey for core from 10.0.0.1 port 33310 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:20.987000 audit[5975]: CRED_ACQ pid=5975 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:20.987000 audit[5975]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5b67e5f0 a2=3 a3=0 items=0 ppid=1 pid=5975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:20.987000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:20.995197 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:21.101721 systemd-logind[1609]: New session 21 of user core. 
Jan 22 00:34:21.103000 audit[5988]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5988 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:21.103000 audit[5988]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcbfdf0590 a2=0 a3=7ffcbfdf057c items=0 ppid=5682 pid=5988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:21.103000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:21.107104 kubelet[2961]: E0122 00:34:21.106763 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:21.113248 containerd[1635]: time="2026-01-22T00:34:21.113210135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:21.116996 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 22 00:34:21.126000 audit[5989]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5989 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:21.126000 audit[5989]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffda5241d10 a2=0 a3=7ffda5241cfc items=0 ppid=5682 pid=5989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:21.126000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:21.128000 audit[5991]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=5991 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:21.128000 audit[5991]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe475e2f20 a2=0 a3=7ffe475e2f0c items=0 ppid=5682 pid=5991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:21.128000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:21.129742 containerd[1635]: time="2026-01-22T00:34:21.129531238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,}" Jan 22 00:34:21.146000 audit[5975]: USER_START pid=5975 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:21.214695 containerd[1635]: time="2026-01-22T00:34:21.209472301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:34:21.214000 audit[5996]: CRED_ACQ pid=5996 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:21.231000 audit[5990]: NETFILTER_CFG table=filter:128 family=2 entries=81 op=nft_register_chain pid=5990 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:21.231000 audit[5990]: SYSCALL arch=c000003e syscall=46 success=yes exit=44276 a0=3 a1=7ffdc5698df0 a2=0 a3=7ffdc5698ddc items=0 ppid=5682 pid=5990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:21.231000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:21.894734 containerd[1635]: time="2026-01-22T00:34:21.893784364Z" level=info msg="StopPodSandbox for \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\"" Jan 22 00:34:22.031059 sshd[5996]: Connection closed by 10.0.0.1 port 33310 Jan 22 00:34:22.032693 sshd-session[5975]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:22.042000 audit[5975]: USER_END pid=5975 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 22 00:34:22.043000 audit[5975]: CRED_DISP pid=5975 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:22.057137 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:33310.service: Deactivated successfully. Jan 22 00:34:22.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.25:22-10.0.0.1:33310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:22.076790 systemd[1]: session-21.scope: Deactivated successfully. Jan 22 00:34:22.094036 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit. Jan 22 00:34:22.101240 systemd-logind[1609]: Removed session 21. Jan 22 00:34:23.073565 containerd[1635]: time="2026-01-22T00:34:23.071366120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,}" Jan 22 00:34:23.094152 containerd[1635]: time="2026-01-22T00:34:23.090176136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,}" Jan 22 00:34:23.316068 systemd-networkd[1529]: cali9cdcd5011fb: Link UP Jan 22 00:34:23.325229 systemd-networkd[1529]: cali9cdcd5011fb: Gained carrier Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.112 [INFO][6018] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--zlbdn-eth0 coredns-66bc5c9577- kube-system 1fc2a2d5-dab7-482f-b368-90c3db40ee93 1072 0 2026-01-22 00:31:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-zlbdn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9cdcd5011fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.112 [INFO][6018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.342 [INFO][6098] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" HandleID="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Workload="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.344 [INFO][6098] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" HandleID="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Workload="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-zlbdn", "timestamp":"2026-01-22 00:34:22.342698112 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.344 [INFO][6098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.344 [INFO][6098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.344 [INFO][6098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.763 [INFO][6098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.883 [INFO][6098] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.917 [INFO][6098] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.937 [INFO][6098] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.987 [INFO][6098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:22.993 [INFO][6098] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:23.023 [INFO][6098] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:23.076 [INFO][6098] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:23.161 [INFO][6098] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:23.189 [INFO][6098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" host="localhost" Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:23.192 [INFO][6098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:23.546194 containerd[1635]: 2026-01-22 00:34:23.192 [INFO][6098] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" HandleID="k8s-pod-network.a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Workload="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.553081 containerd[1635]: 2026-01-22 00:34:23.233 [INFO][6018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zlbdn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fc2a2d5-dab7-482f-b368-90c3db40ee93", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-zlbdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cdcd5011fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:23.553081 containerd[1635]: 2026-01-22 00:34:23.233 [INFO][6018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.553081 containerd[1635]: 2026-01-22 00:34:23.233 [INFO][6018] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cdcd5011fb ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.553081 containerd[1635]: 2026-01-22 00:34:23.328 [INFO][6018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.553541 containerd[1635]: 2026-01-22 00:34:23.365 [INFO][6018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zlbdn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fc2a2d5-dab7-482f-b368-90c3db40ee93", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab", 
Pod:"coredns-66bc5c9577-zlbdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cdcd5011fb", MAC:"ba:2e:c2:b4:ec:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:23.553541 containerd[1635]: 2026-01-22 00:34:23.423 [INFO][6018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" Namespace="kube-system" Pod="coredns-66bc5c9577-zlbdn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zlbdn-eth0" Jan 22 00:34:23.895202 systemd-networkd[1529]: calie588ef0400d: Link UP Jan 22 00:34:23.904219 systemd-networkd[1529]: calie588ef0400d: Gained carrier Jan 22 00:34:24.009436 containerd[1635]: time="2026-01-22T00:34:24.009379849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:22.095 [INFO][6017] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0 calico-apiserver-79ff4d8844- calico-apiserver e1d20bb6-82c3-4af1-9823-e27799a9a91a 1069 0 2026-01-22 00:32:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79ff4d8844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79ff4d8844-vkmvl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie588ef0400d [] [] }} ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:22.096 [INFO][6017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:22.415 [INFO][6091] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" HandleID="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Workload="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:22.425 [INFO][6091] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" HandleID="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Workload="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000201550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79ff4d8844-vkmvl", "timestamp":"2026-01-22 00:34:22.415729454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:22.425 [INFO][6091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.190 [INFO][6091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.193 [INFO][6091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.284 [INFO][6091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.370 [INFO][6091] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.434 [INFO][6091] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.504 [INFO][6091] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.535 [INFO][6091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.536 [INFO][6091] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.578 [INFO][6091] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1 Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.639 [INFO][6091] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.749 [INFO][6091] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.764 [INFO][6091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" host="localhost" Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.764 [INFO][6091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:34:24.136186 containerd[1635]: 2026-01-22 00:34:23.764 [INFO][6091] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" HandleID="k8s-pod-network.da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Workload="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.159538 containerd[1635]: 2026-01-22 00:34:23.819 [INFO][6017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0", GenerateName:"calico-apiserver-79ff4d8844-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1d20bb6-82c3-4af1-9823-e27799a9a91a", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79ff4d8844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79ff4d8844-vkmvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie588ef0400d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:24.159538 containerd[1635]: 2026-01-22 00:34:23.820 [INFO][6017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.159538 containerd[1635]: 2026-01-22 00:34:23.820 [INFO][6017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie588ef0400d ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.159538 containerd[1635]: 2026-01-22 00:34:23.918 [INFO][6017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.159538 containerd[1635]: 2026-01-22 00:34:23.920 [INFO][6017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0", 
GenerateName:"calico-apiserver-79ff4d8844-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1d20bb6-82c3-4af1-9823-e27799a9a91a", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79ff4d8844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1", Pod:"calico-apiserver-79ff4d8844-vkmvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie588ef0400d", MAC:"0e:1c:4e:44:19:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:24.159538 containerd[1635]: 2026-01-22 00:34:24.097 [INFO][6017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-vkmvl" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--vkmvl-eth0" Jan 22 00:34:24.191348 containerd[1635]: time="2026-01-22T00:34:24.191195551Z" level=info msg="connecting to shim a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab" 
address="unix:///run/containerd/s/189d271e8384eb510098b7efd94ace6c4c68883cecf4b4ce8b187707b2d0cf79" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:24.280000 audit[6160]: NETFILTER_CFG table=filter:129 family=2 entries=42 op=nft_register_chain pid=6160 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:24.295266 kernel: kauditd_printk_skb: 115 callbacks suppressed Jan 22 00:34:24.295555 kernel: audit: type=1325 audit(1769042064.280:773): table=filter:129 family=2 entries=42 op=nft_register_chain pid=6160 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:24.280000 audit[6160]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7fffa51fdf80 a2=0 a3=7fffa51fdf6c items=0 ppid=5682 pid=6160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.387207 kernel: audit: type=1300 audit(1769042064.280:773): arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7fffa51fdf80 a2=0 a3=7fffa51fdf6c items=0 ppid=5682 pid=6160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.387328 kernel: audit: type=1327 audit(1769042064.280:773): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:24.280000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:24.490204 systemd-networkd[1529]: cali9cdcd5011fb: Gained IPv6LL Jan 22 00:34:24.706691 systemd-networkd[1529]: calib6724469d17: Link UP Jan 22 00:34:24.739992 systemd-networkd[1529]: calib6724469d17: Gained 
carrier Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:22.777 [WARNING][6075] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:22.777 [INFO][6075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:22.777 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" iface="eth0" netns="" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:22.777 [INFO][6075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:22.777 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:23.219 [INFO][6113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:23.219 [INFO][6113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:24.678 [INFO][6113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:24.767 [WARNING][6113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:24.767 [INFO][6113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:24.785 [INFO][6113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:24.883287 containerd[1635]: 2026-01-22 00:34:24.821 [INFO][6075] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:24.883287 containerd[1635]: time="2026-01-22T00:34:24.882533082Z" level=info msg="TearDown network for sandbox \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" successfully" Jan 22 00:34:24.883287 containerd[1635]: time="2026-01-22T00:34:24.882686168Z" level=info msg="StopPodSandbox for \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" returns successfully" Jan 22 00:34:24.886953 kernel: audit: type=1325 audit(1769042064.869:774): table=filter:130 family=2 entries=41 op=nft_register_chain pid=6226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:24.869000 audit[6226]: NETFILTER_CFG table=filter:130 family=2 entries=41 op=nft_register_chain pid=6226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:24.869000 audit[6226]: SYSCALL arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffd2138b920 a2=0 a3=7ffd2138b90c items=0 ppid=5682 pid=6226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.982043 kernel: audit: type=1300 audit(1769042064.869:774): arch=c000003e syscall=46 success=yes exit=23076 a0=3 a1=7ffd2138b920 a2=0 a3=7ffd2138b90c items=0 ppid=5682 pid=6226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:24.869000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:24.991263 kernel: audit: type=1327 audit(1769042064.869:774): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:25.012992 containerd[1635]: time="2026-01-22T00:34:25.005523056Z" level=info msg="RemovePodSandbox for \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\"" Jan 22 00:34:25.012992 containerd[1635]: time="2026-01-22T00:34:25.011040945Z" level=info msg="Forcibly stopping sandbox \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\"" Jan 22 00:34:25.053369 containerd[1635]: time="2026-01-22T00:34:25.053027531Z" level=info msg="connecting to shim da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1" address="unix:///run/containerd/s/91f1e1dd3b26a6e40145176f1669edb60ffb7f1f2a4a60b33932b7b6df05ddd7" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:21.950 [INFO][6000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--956jd-eth0 goldmane-7c778bb748- calico-system 6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2 1055 0 2026-01-22 00:32:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-956jd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib6724469d17 [] [] }} ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:21.985 [INFO][6000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" 
Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:22.449 [INFO][6077] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" HandleID="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Workload="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:22.450 [INFO][6077] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" HandleID="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Workload="localhost-k8s-goldmane--7c778bb748--956jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-956jd", "timestamp":"2026-01-22 00:34:22.449291452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:22.450 [INFO][6077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:23.771 [INFO][6077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:23.771 [INFO][6077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:23.875 [INFO][6077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.012 [INFO][6077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.234 [INFO][6077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.255 [INFO][6077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.322 [INFO][6077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.322 [INFO][6077] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.388 [INFO][6077] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.442 [INFO][6077] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.637 [INFO][6077] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.638 [INFO][6077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" host="localhost" Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.642 [INFO][6077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:25.107974 containerd[1635]: 2026-01-22 00:34:24.642 [INFO][6077] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" HandleID="k8s-pod-network.3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Workload="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.208080 containerd[1635]: 2026-01-22 00:34:24.679 [INFO][6000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--956jd-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-956jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6724469d17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:25.208080 containerd[1635]: 2026-01-22 00:34:24.680 [INFO][6000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.208080 containerd[1635]: 2026-01-22 00:34:24.680 [INFO][6000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6724469d17 ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.208080 containerd[1635]: 2026-01-22 00:34:24.795 [INFO][6000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.208080 containerd[1635]: 2026-01-22 00:34:24.797 [INFO][6000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--956jd-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b", Pod:"goldmane-7c778bb748-956jd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6724469d17", MAC:"42:b1:09:b4:5e:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:25.208080 containerd[1635]: 2026-01-22 00:34:24.889 [INFO][6000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" Namespace="calico-system" Pod="goldmane-7c778bb748-956jd" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--956jd-eth0" Jan 22 00:34:25.626026 systemd[1]: Started 
cri-containerd-a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab.scope - libcontainer container a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab. Jan 22 00:34:25.646000 audit[6276]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=6276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:25.674188 kernel: audit: type=1325 audit(1769042065.646:775): table=filter:131 family=2 entries=20 op=nft_register_rule pid=6276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:25.724334 kernel: audit: type=1300 audit(1769042065.646:775): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7d0b7000 a2=0 a3=7fff7d0b6fec items=0 ppid=3099 pid=6276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.646000 audit[6276]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7d0b7000 a2=0 a3=7fff7d0b6fec items=0 ppid=3099 pid=6276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.764096 kernel: audit: type=1327 audit(1769042065.646:775): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:25.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:25.817087 kernel: audit: type=1325 audit(1769042065.679:776): table=filter:132 family=2 entries=58 op=nft_register_chain pid=6266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:25.679000 audit[6266]: NETFILTER_CFG table=filter:132 family=2 entries=58 op=nft_register_chain pid=6266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 
00:34:25.679000 audit[6266]: SYSCALL arch=c000003e syscall=46 success=yes exit=30532 a0=3 a1=7ffec810c4c0 a2=0 a3=7ffec810c4ac items=0 ppid=5682 pid=6266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.679000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:25.716000 audit[6276]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=6276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:25.716000 audit[6276]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff7d0b7000 a2=0 a3=0 items=0 ppid=3099 pid=6276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:25.865000 audit: BPF prog-id=214 op=LOAD Jan 22 00:34:25.868000 audit: BPF prog-id=215 op=LOAD Jan 22 00:34:25.868000 audit[6223]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 22 00:34:25.868000 audit: BPF prog-id=215 
op=UNLOAD Jan 22 00:34:25.868000 audit[6223]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 22 00:34:25.868000 audit: BPF prog-id=216 op=LOAD Jan 22 00:34:25.868000 audit[6223]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 22 00:34:25.872000 audit: BPF prog-id=217 op=LOAD Jan 22 00:34:25.872000 audit[6223]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 
22 00:34:25.872000 audit: BPF prog-id=217 op=UNLOAD Jan 22 00:34:25.872000 audit[6223]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 22 00:34:25.872000 audit: BPF prog-id=216 op=UNLOAD Jan 22 00:34:25.872000 audit[6223]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 22 00:34:25.872000 audit: BPF prog-id=218 op=LOAD Jan 22 00:34:25.872000 audit[6223]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=6188 pid=6223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:25.872000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133306464663339636130346436363163613339326363356439336164 Jan 22 00:34:25.880151 systemd-networkd[1529]: calie588ef0400d: Gained IPv6LL Jan 22 00:34:25.887467 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:25.900707 systemd[1]: Started cri-containerd-da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1.scope - libcontainer container da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1. Jan 22 00:34:26.030045 kubelet[2961]: E0122 00:34:26.029956 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:26.066104 systemd-networkd[1529]: calidb24dd52cbd: Link UP Jan 22 00:34:26.071402 systemd-networkd[1529]: calidb24dd52cbd: Gained carrier Jan 22 00:34:26.311042 containerd[1635]: time="2026-01-22T00:34:26.309420513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,}" Jan 22 00:34:26.429135 containerd[1635]: time="2026-01-22T00:34:26.428256172Z" level=info msg="connecting to shim 3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b" address="unix:///run/containerd/s/43c30c539e3dfe66df810f9f5488d47a7ae114c2f8e81206c5b9a9138e6e87b3" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:26.516300 systemd-networkd[1529]: calib6724469d17: Gained IPv6LL Jan 22 00:34:26.575000 audit: BPF prog-id=219 op=LOAD Jan 22 00:34:26.586000 audit: BPF prog-id=220 op=LOAD Jan 22 00:34:26.586000 audit[6279]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=6234 pid=6279 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.586000 audit: BPF prog-id=220 op=UNLOAD Jan 22 00:34:26.586000 audit[6279]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6234 pid=6279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.596000 audit: BPF prog-id=221 op=LOAD Jan 22 00:34:26.596000 audit[6279]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=6234 pid=6279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.598000 audit: BPF prog-id=222 op=LOAD Jan 22 00:34:26.598000 audit[6279]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c0001a8218 a2=98 a3=0 items=0 ppid=6234 pid=6279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.600000 audit: BPF prog-id=222 op=UNLOAD Jan 22 00:34:26.600000 audit[6279]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6234 pid=6279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.601000 audit: BPF prog-id=221 op=UNLOAD Jan 22 00:34:26.601000 audit[6279]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6234 pid=6279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.601000 audit: BPF prog-id=223 op=LOAD Jan 22 00:34:26.601000 audit[6279]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=6234 pid=6279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:26.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461323532363663303635373739626434653165623662343635663764 Jan 22 00:34:26.618187 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:23.892 [INFO][6123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kfg9f-eth0 csi-node-driver- calico-system d3f33826-c9a7-4e28-a985-814cedd1e52b 881 0 2026-01-22 00:32:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kfg9f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidb24dd52cbd [] [] }} ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:23.895 [INFO][6123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:24.796 [INFO][6170] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" HandleID="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Workload="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:24.813 [INFO][6170] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" HandleID="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Workload="localhost-k8s-csi--node--driver--kfg9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kfg9f", "timestamp":"2026-01-22 00:34:24.796714493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:24.813 [INFO][6170] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:24.813 [INFO][6170] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:24.813 [INFO][6170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:24.898 [INFO][6170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.024 [INFO][6170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.324 [INFO][6170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.340 [INFO][6170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.586 [INFO][6170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.587 [INFO][6170] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.651 [INFO][6170] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555 Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.819 [INFO][6170] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.909 [INFO][6170] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.910 [INFO][6170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" host="localhost" Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.910 [INFO][6170] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:26.788996 containerd[1635]: 2026-01-22 00:34:25.910 [INFO][6170] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" HandleID="k8s-pod-network.bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Workload="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.790264 containerd[1635]: 2026-01-22 00:34:25.962 [INFO][6123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfg9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3f33826-c9a7-4e28-a985-814cedd1e52b", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kfg9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb24dd52cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:26.790264 containerd[1635]: 2026-01-22 00:34:25.962 [INFO][6123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.790264 containerd[1635]: 2026-01-22 00:34:25.962 [INFO][6123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb24dd52cbd ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.790264 containerd[1635]: 2026-01-22 00:34:26.380 [INFO][6123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.790264 containerd[1635]: 2026-01-22 00:34:26.473 [INFO][6123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" 
Namespace="calico-system" Pod="csi-node-driver-kfg9f" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfg9f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3f33826-c9a7-4e28-a985-814cedd1e52b", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555", Pod:"csi-node-driver-kfg9f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidb24dd52cbd", MAC:"a2:aa:4e:e4:91:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:26.790264 containerd[1635]: 2026-01-22 00:34:26.729 [INFO][6123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" Namespace="calico-system" Pod="csi-node-driver-kfg9f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kfg9f-eth0" Jan 22 00:34:26.914440 systemd[1]: Started cri-containerd-3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b.scope - libcontainer container 3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b. Jan 22 00:34:27.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.25:22-10.0.0.1:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:27.110738 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:35850.service - OpenSSH per-connection server daemon (10.0.0.1:35850). Jan 22 00:34:27.590894 containerd[1635]: time="2026-01-22T00:34:27.587376649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlbdn,Uid:1fc2a2d5-dab7-482f-b368-90c3db40ee93,Namespace:kube-system,Attempt:0,} returns sandbox id \"a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab\"" Jan 22 00:34:27.614984 kubelet[2961]: E0122 00:34:27.613558 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:27.738074 containerd[1635]: time="2026-01-22T00:34:27.693245100Z" level=info msg="connecting to shim bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555" address="unix:///run/containerd/s/9824c9292ba8189850790560175890ec23f9c9d34bd7c84ef1c4cc4b9bfeb425" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:27.869000 audit[6390]: USER_ACCT pid=6390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:27.877000 audit: BPF prog-id=224 op=LOAD Jan 22 00:34:27.879000 audit[6390]: CRED_ACQ pid=6390 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:27.880160 sshd[6390]: Accepted publickey for core from 10.0.0.1 port 35850 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:27.880000 audit[6390]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc86c94370 a2=3 a3=0 items=0 ppid=1 pid=6390 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:27.880000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:27.885572 sshd-session[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:27.887000 audit: BPF prog-id=225 op=LOAD Jan 22 00:34:27.887000 audit[6359]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:27.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:27.887000 audit: BPF prog-id=225 op=UNLOAD Jan 22 00:34:27.887000 audit[6359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:27.887000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:27.989726 containerd[1635]: time="2026-01-22T00:34:27.887319223Z" level=info msg="CreateContainer within sandbox \"a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:34:27.980000 audit: BPF prog-id=226 op=LOAD Jan 22 00:34:27.980000 audit[6359]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:27.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:27.997000 audit: BPF prog-id=227 op=LOAD Jan 22 00:34:27.997000 audit[6359]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:27.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:28.001000 audit: BPF prog-id=227 op=UNLOAD Jan 22 00:34:28.001000 audit[6359]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:28.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:28.001000 audit: BPF prog-id=226 op=UNLOAD Jan 22 00:34:28.001000 audit[6359]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:28.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:28.051211 systemd-logind[1609]: New session 22 of user core. Jan 22 00:34:28.066464 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 22 00:34:28.078000 audit[6390]: USER_START pid=6390 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:28.101000 audit[6423]: CRED_ACQ pid=6423 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:28.116488 systemd-networkd[1529]: calidb24dd52cbd: Gained IPv6LL Jan 22 00:34:28.001000 audit: BPF prog-id=228 op=LOAD Jan 22 00:34:28.001000 audit[6359]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=6327 pid=6359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:28.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313863646465633262326130363636356431356432633430613263 Jan 22 00:34:28.169694 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:28.209550 systemd-networkd[1529]: cali4a49dd23676: Link UP Jan 22 00:34:28.218007 systemd-networkd[1529]: cali4a49dd23676: Gained carrier Jan 22 00:34:28.507086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800805301.mount: Deactivated successfully. 
Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:24.225 [INFO][6124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0 calico-kube-controllers-6fd67fc48- calico-system cddc50d2-bbfe-4bdb-8697-ec1251db07b4 1075 0 2026-01-22 00:32:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fd67fc48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fd67fc48-sbqdz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a49dd23676 [] [] }} ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:24.261 [INFO][6124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.322 [INFO][6212] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" HandleID="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Workload="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.327 [INFO][6212] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" 
HandleID="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Workload="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fd67fc48-sbqdz", "timestamp":"2026-01-22 00:34:26.322732196 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.328 [INFO][6212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.328 [INFO][6212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.329 [INFO][6212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.378 [INFO][6212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.718 [INFO][6212] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.813 [INFO][6212] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.859 [INFO][6212] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.886 [INFO][6212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:28.569116 
containerd[1635]: 2026-01-22 00:34:26.887 [INFO][6212] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:26.897 [INFO][6212] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68 Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:27.055 [INFO][6212] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:27.245 [INFO][6212] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:27.566 [INFO][6212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" host="localhost" Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:27.695 [INFO][6212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:34:28.569116 containerd[1635]: 2026-01-22 00:34:27.696 [INFO][6212] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" HandleID="k8s-pod-network.a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Workload="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.572608 containerd[1635]: 2026-01-22 00:34:27.824 [INFO][6124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0", GenerateName:"calico-kube-controllers-6fd67fc48-", Namespace:"calico-system", SelfLink:"", UID:"cddc50d2-bbfe-4bdb-8697-ec1251db07b4", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd67fc48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fd67fc48-sbqdz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a49dd23676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:28.572608 containerd[1635]: 2026-01-22 00:34:27.825 [INFO][6124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.572608 containerd[1635]: 2026-01-22 00:34:27.825 [INFO][6124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a49dd23676 ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.572608 containerd[1635]: 2026-01-22 00:34:28.224 [INFO][6124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.572608 containerd[1635]: 2026-01-22 00:34:28.239 [INFO][6124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0", GenerateName:"calico-kube-controllers-6fd67fc48-", Namespace:"calico-system", SelfLink:"", UID:"cddc50d2-bbfe-4bdb-8697-ec1251db07b4", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd67fc48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68", Pod:"calico-kube-controllers-6fd67fc48-sbqdz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a49dd23676", MAC:"a6:22:bb:18:a4:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:28.572608 containerd[1635]: 2026-01-22 00:34:28.422 [INFO][6124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" Namespace="calico-system" Pod="calico-kube-controllers-6fd67fc48-sbqdz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fd67fc48--sbqdz-eth0" Jan 22 00:34:28.653955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800649417.mount: Deactivated successfully. 
Jan 22 00:34:28.712130 containerd[1635]: time="2026-01-22T00:34:28.712063370Z" level=info msg="Container 198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:28.763293 containerd[1635]: time="2026-01-22T00:34:28.762176460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-vkmvl,Uid:e1d20bb6-82c3-4af1-9823-e27799a9a91a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da25266c065779bd4e1eb6b465f7d15a0262a28f399b477b723265c2d94e9de1\"" Jan 22 00:34:28.799276 containerd[1635]: time="2026-01-22T00:34:28.796426409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:34:28.866426 containerd[1635]: time="2026-01-22T00:34:28.866375499Z" level=info msg="CreateContainer within sandbox \"a30ddf39ca04d661ca392cc5d93ad9c54f8ba6b31f728bfbcaff529441405aab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4\"" Jan 22 00:34:28.883158 containerd[1635]: time="2026-01-22T00:34:28.883104254Z" level=info msg="StartContainer for \"198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4\"" Jan 22 00:34:28.887377 systemd[1]: Started cri-containerd-bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555.scope - libcontainer container bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555. 
Jan 22 00:34:28.918000 audit[6467]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=6467 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:28.918000 audit[6467]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffd19a32d00 a2=0 a3=7ffd19a32cec items=0 ppid=5682 pid=6467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:28.918000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:28.936115 containerd[1635]: time="2026-01-22T00:34:28.936059822Z" level=info msg="connecting to shim 198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4" address="unix:///run/containerd/s/189d271e8384eb510098b7efd94ace6c4c68883cecf4b4ce8b187707b2d0cf79" protocol=ttrpc version=3 Jan 22 00:34:28.977968 containerd[1635]: time="2026-01-22T00:34:28.976160626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:28.996755 containerd[1635]: time="2026-01-22T00:34:28.987196240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:34:28.996755 containerd[1635]: time="2026-01-22T00:34:28.987330049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:29.001060 kubelet[2961]: E0122 00:34:28.992982 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:29.001060 kubelet[2961]: E0122 00:34:28.993039 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:29.001060 kubelet[2961]: E0122 00:34:28.993253 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:29.001060 kubelet[2961]: E0122 00:34:28.993297 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a" Jan 22 00:34:29.028047 containerd[1635]: time="2026-01-22T00:34:29.026506633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:34:29.104554 sshd[6423]: Connection closed by 10.0.0.1 port 35850 Jan 22 00:34:29.105321 sshd-session[6390]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:29.110000 audit[6390]: USER_END pid=6390 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:29.113000 audit[6390]: CRED_DISP pid=6390 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:29.124950 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:35850.service: Deactivated successfully. Jan 22 00:34:29.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.25:22-10.0.0.1:35850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:29.134430 systemd[1]: session-22.scope: Deactivated successfully. Jan 22 00:34:29.149381 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit. Jan 22 00:34:29.159586 systemd-logind[1609]: Removed session 22. Jan 22 00:34:29.192198 containerd[1635]: time="2026-01-22T00:34:29.190733014Z" level=info msg="connecting to shim a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68" address="unix:///run/containerd/s/36fad74a60159fc983c1ef255230c66bc0041ac5051e23469c4fae2ba6d26e9c" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:29.212042 systemd-networkd[1529]: calic107cca8a16: Link UP Jan 22 00:34:29.256307 systemd-networkd[1529]: calic107cca8a16: Gained carrier Jan 22 00:34:29.271391 systemd-networkd[1529]: cali4a49dd23676: Gained IPv6LL Jan 22 00:34:29.275000 audit: BPF prog-id=229 op=LOAD Jan 22 00:34:29.299480 kernel: kauditd_printk_skb: 86 callbacks suppressed Jan 22 00:34:29.299722 kernel: audit: type=1334 audit(1769042069.284:813): prog-id=230 op=LOAD Jan 22 00:34:29.284000 audit: BPF prog-id=230 op=LOAD Jan 22 00:34:29.315330 kernel: audit: type=1300 audit(1769042069.284:813): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6407 pid=6446 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.315482 kernel: audit: type=1327 audit(1769042069.284:813): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.315524 kernel: audit: type=1334 audit(1769042069.299:814): prog-id=230 op=UNLOAD Jan 22 00:34:29.315561 kernel: audit: type=1300 audit(1769042069.299:814): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.316402 kernel: audit: type=1327 audit(1769042069.299:814): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.316447 kernel: audit: type=1334 audit(1769042069.299:815): prog-id=231 op=LOAD Jan 22 00:34:29.316474 kernel: audit: type=1300 audit(1769042069.299:815): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.316512 kernel: audit: type=1327 audit(1769042069.299:815): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.316539 kernel: audit: type=1334 audit(1769042069.299:816): prog-id=232 op=LOAD Jan 22 00:34:29.284000 audit[6446]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.299000 audit: BPF prog-id=230 op=UNLOAD Jan 22 00:34:29.299000 audit[6446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.299000 audit: BPF prog-id=231 op=LOAD Jan 22 00:34:29.299000 audit[6446]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.299000 audit: BPF prog-id=232 op=LOAD Jan 22 00:34:29.299000 audit[6446]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.299000 audit: BPF prog-id=232 op=UNLOAD Jan 22 00:34:29.299000 audit[6446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.299000 audit: BPF prog-id=231 op=UNLOAD Jan 22 00:34:29.299000 audit[6446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.299000 audit: BPF prog-id=233 op=LOAD Jan 22 00:34:29.299000 audit[6446]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=6407 pid=6446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261643930353464326534626331613266623363643536353865656265 Jan 22 00:34:29.379310 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:29.490000 audit[6515]: NETFILTER_CFG table=filter:135 family=2 entries=52 op=nft_register_chain pid=6515 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:29.490000 audit[6515]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffddab29ec0 a2=0 a3=7ffddab29eac items=0 ppid=5682 pid=6515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.490000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 
00:34:25.560 [INFO][6173] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0 calico-apiserver-79ff4d8844- calico-apiserver cc699319-6548-46e3-b846-fb40b8bdda3a 1066 0 2026-01-22 00:32:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79ff4d8844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79ff4d8844-xf4gm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic107cca8a16 [] [] }} ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:25.634 [INFO][6173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:26.771 [INFO][6289] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" HandleID="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Workload="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:26.774 [INFO][6289] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" HandleID="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" 
Workload="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f2f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79ff4d8844-xf4gm", "timestamp":"2026-01-22 00:34:26.771308104 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:26.774 [INFO][6289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:27.625 [INFO][6289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:27.625 [INFO][6289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.435 [INFO][6289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.621 [INFO][6289] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.818 [INFO][6289] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.866 [INFO][6289] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.923 [INFO][6289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.928 [INFO][6289] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:28.966 [INFO][6289] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976 Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:29.073 [INFO][6289] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:29.109 [INFO][6289] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:29.111 [INFO][6289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" host="localhost" Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:29.111 [INFO][6289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:34:29.504459 containerd[1635]: 2026-01-22 00:34:29.111 [INFO][6289] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" HandleID="k8s-pod-network.a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Workload="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.507561 containerd[1635]: 2026-01-22 00:34:29.148 [INFO][6173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0", GenerateName:"calico-apiserver-79ff4d8844-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc699319-6548-46e3-b846-fb40b8bdda3a", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79ff4d8844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79ff4d8844-xf4gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic107cca8a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:29.507561 containerd[1635]: 2026-01-22 00:34:29.148 [INFO][6173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.507561 containerd[1635]: 2026-01-22 00:34:29.148 [INFO][6173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic107cca8a16 ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.507561 containerd[1635]: 2026-01-22 00:34:29.256 [INFO][6173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.507561 containerd[1635]: 2026-01-22 00:34:29.264 [INFO][6173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0", 
GenerateName:"calico-apiserver-79ff4d8844-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc699319-6548-46e3-b846-fb40b8bdda3a", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79ff4d8844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976", Pod:"calico-apiserver-79ff4d8844-xf4gm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic107cca8a16", MAC:"1e:72:b9:5d:75:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:29.507561 containerd[1635]: 2026-01-22 00:34:29.335 [INFO][6173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" Namespace="calico-apiserver" Pod="calico-apiserver-79ff4d8844-xf4gm" WorkloadEndpoint="localhost-k8s-calico--apiserver--79ff4d8844--xf4gm-eth0" Jan 22 00:34:29.619145 containerd[1635]: time="2026-01-22T00:34:29.611553687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:27.132 [WARNING][6287] cni-plugin/k8s.go 598: 
WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" WorkloadEndpoint="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:27.132 [INFO][6287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:27.132 [INFO][6287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" iface="eth0" netns="" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:27.132 [INFO][6287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:27.132 [INFO][6287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:28.512 [INFO][6396] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:28.512 [INFO][6396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:29.113 [INFO][6396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:29.302 [WARNING][6396] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:29.302 [INFO][6396] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" HandleID="k8s-pod-network.ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Workload="localhost-k8s-whisker--5df8f6b8cf--s2hhm-eth0" Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:29.344 [INFO][6396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:34:29.619145 containerd[1635]: 2026-01-22 00:34:29.455 [INFO][6287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb" Jan 22 00:34:29.619145 containerd[1635]: time="2026-01-22T00:34:29.612782951Z" level=info msg="TearDown network for sandbox \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" successfully" Jan 22 00:34:29.620610 systemd[1]: Started cri-containerd-198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4.scope - libcontainer container 198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4. 
Jan 22 00:34:29.700151 containerd[1635]: time="2026-01-22T00:34:29.698106115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:34:29.700346 kubelet[2961]: E0122 00:34:29.699605 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:29.700346 kubelet[2961]: E0122 00:34:29.700109 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:29.700346 kubelet[2961]: E0122 00:34:29.700204 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:29.700346 kubelet[2961]: E0122 00:34:29.700250 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:34:29.723531 containerd[1635]: time="2026-01-22T00:34:29.719996065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:29.866550 containerd[1635]: time="2026-01-22T00:34:29.866480796Z" level=info msg="Ensure that sandbox ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb in task-service has been cleanup successfully" Jan 22 00:34:29.914000 audit: BPF prog-id=234 op=LOAD Jan 22 00:34:29.920000 audit: BPF prog-id=235 op=LOAD Jan 22 00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 00:34:29.920000 audit: BPF prog-id=235 op=UNLOAD Jan 22 00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 00:34:29.920000 audit: BPF prog-id=236 op=LOAD Jan 22 
00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 00:34:29.920000 audit: BPF prog-id=237 op=LOAD Jan 22 00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 00:34:29.920000 audit: BPF prog-id=237 op=UNLOAD Jan 22 00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 
00:34:29.920000 audit: BPF prog-id=236 op=UNLOAD Jan 22 00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 00:34:29.920000 audit: BPF prog-id=238 op=LOAD Jan 22 00:34:29.920000 audit[6477]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=6188 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:29.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139386136336439353666376535393731653764623936613932383165 Jan 22 00:34:30.008000 audit[6542]: NETFILTER_CFG table=filter:136 family=2 entries=57 op=nft_register_chain pid=6542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:30.008000 audit[6542]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7ffd7f9d4000 a2=0 a3=7ffd7f9d3fec items=0 ppid=5682 pid=6542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.008000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:30.025114 kubelet[2961]: E0122 00:34:30.022357 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a" Jan 22 00:34:30.036700 containerd[1635]: time="2026-01-22T00:34:30.036374272Z" level=info msg="RemovePodSandbox \"ffc5d4bb972d5442ab6e99ec55894fed217bf39f9f8192fdb78855275761f7eb\" returns successfully" Jan 22 00:34:30.205561 containerd[1635]: time="2026-01-22T00:34:30.197064610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-956jd,Uid:6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2,Namespace:calico-system,Attempt:0,} returns sandbox id \"3818cddec2b2a06665d15d2c40a2cf6f7337f331fcac36f6b0fa6af3e07e5d4b\"" Jan 22 00:34:30.284402 containerd[1635]: time="2026-01-22T00:34:30.269009855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfg9f,Uid:d3f33826-c9a7-4e28-a985-814cedd1e52b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bad9054d2e4bc1a2fb3cd5658eebeb1b5e0c88ef12564ff21c5a6acbea70d555\"" Jan 22 00:34:30.280246 systemd[1]: Started cri-containerd-a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68.scope - libcontainer container a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68. 
Jan 22 00:34:30.308143 containerd[1635]: time="2026-01-22T00:34:30.308089606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:34:30.402000 audit[6572]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=6572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:30.402000 audit[6572]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc0eb48b0 a2=0 a3=7ffdc0eb489c items=0 ppid=3099 pid=6572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:30.466000 audit[6572]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=6572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:30.466000 audit[6572]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc0eb48b0 a2=0 a3=0 items=0 ppid=3099 pid=6572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:30.487727 containerd[1635]: time="2026-01-22T00:34:30.487587246Z" level=info msg="connecting to shim a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976" address="unix:///run/containerd/s/e1f81c005b0a958d625de99f8a58d89cf6e5c329d3c8c0d17dcc9e859f7cc5a9" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:34:30.531583 containerd[1635]: time="2026-01-22T00:34:30.531359766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:30.543253 containerd[1635]: 
time="2026-01-22T00:34:30.543110038Z" level=info msg="StartContainer for \"198a63d956f7e5971e7db96a9281e911d95f10d7924fa5e04a35bce81d4b0fd4\" returns successfully" Jan 22 00:34:30.550617 containerd[1635]: time="2026-01-22T00:34:30.549195829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:34:30.550617 containerd[1635]: time="2026-01-22T00:34:30.550152300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:30.551060 kubelet[2961]: E0122 00:34:30.550450 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:34:30.551060 kubelet[2961]: E0122 00:34:30.550494 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:34:30.551382 kubelet[2961]: E0122 00:34:30.551267 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:30.551382 kubelet[2961]: E0122 00:34:30.551316 2961 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:34:30.554534 containerd[1635]: time="2026-01-22T00:34:30.554187972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:34:30.597000 audit: BPF prog-id=239 op=LOAD Jan 22 00:34:30.600000 audit: BPF prog-id=240 op=LOAD Jan 22 00:34:30.600000 audit[6520]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.600000 audit: BPF prog-id=240 op=UNLOAD Jan 22 00:34:30.600000 audit[6520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.603000 audit: BPF prog-id=241 op=LOAD Jan 22 00:34:30.603000 
audit[6520]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.603000 audit: BPF prog-id=242 op=LOAD Jan 22 00:34:30.603000 audit[6520]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.605000 audit: BPF prog-id=242 op=UNLOAD Jan 22 00:34:30.605000 audit[6520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.605000 audit: BPF 
prog-id=241 op=UNLOAD Jan 22 00:34:30.605000 audit[6520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.605000 audit: BPF prog-id=243 op=LOAD Jan 22 00:34:30.605000 audit[6520]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=6494 pid=6520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:30.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130386663316530616434656131613237383564303063366435303065 Jan 22 00:34:30.631483 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:30.696070 containerd[1635]: time="2026-01-22T00:34:30.694998704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:30.739083 containerd[1635]: time="2026-01-22T00:34:30.731056403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:34:30.739083 containerd[1635]: 
time="2026-01-22T00:34:30.731211551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:30.740021 kubelet[2961]: E0122 00:34:30.732220 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:34:30.740021 kubelet[2961]: E0122 00:34:30.732265 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:34:30.740021 kubelet[2961]: E0122 00:34:30.732330 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:30.765420 containerd[1635]: time="2026-01-22T00:34:30.765362999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:34:30.871300 systemd-networkd[1529]: calic107cca8a16: Gained IPv6LL Jan 22 00:34:30.914329 containerd[1635]: time="2026-01-22T00:34:30.908382357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:30.959974 containerd[1635]: time="2026-01-22T00:34:30.954953436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:34:30.959974 containerd[1635]: time="2026-01-22T00:34:30.956019618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:30.967500 kubelet[2961]: E0122 00:34:30.964759 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:34:30.967500 kubelet[2961]: E0122 00:34:30.966995 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:34:30.967500 kubelet[2961]: E0122 00:34:30.967086 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:30.967500 kubelet[2961]: E0122 00:34:30.967130 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:34:30.976464 containerd[1635]: time="2026-01-22T00:34:30.974461558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd67fc48-sbqdz,Uid:cddc50d2-bbfe-4bdb-8697-ec1251db07b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a08fc1e0ad4ea1a2785d00c6d500ed4bbc9eaeddd76700a4857708b768192b68\"" Jan 22 00:34:30.996205 systemd[1]: Started cri-containerd-a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976.scope - libcontainer container a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976. Jan 22 00:34:31.022336 containerd[1635]: time="2026-01-22T00:34:31.022293579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:34:31.121121 systemd-networkd[1529]: calie057b862e53: Link UP Jan 22 00:34:31.155737 kubelet[2961]: E0122 00:34:31.155580 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:34:31.162525 systemd-networkd[1529]: calie057b862e53: Gained carrier Jan 22 00:34:31.199448 kubelet[2961]: E0122 00:34:31.185156 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 22 00:34:31.342551 kubelet[2961]: E0122 00:34:31.341354 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a" Jan 22 00:34:31.353290 kubelet[2961]: E0122 00:34:31.351340 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:34:31.396084 containerd[1635]: time="2026-01-22T00:34:31.395461864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:31.426978 containerd[1635]: time="2026-01-22T00:34:31.426494151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:34:31.428213 containerd[1635]: time="2026-01-22T00:34:31.428056091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:31.431074 kubelet[2961]: E0122 00:34:31.429365 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:34:31.431477 kubelet[2961]: E0122 00:34:31.431444 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:34:31.434168 kubelet[2961]: E0122 00:34:31.434049 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:31.434322 kubelet[2961]: E0122 00:34:31.434283 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:28.478 [INFO][6329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--zqk9r-eth0 coredns-66bc5c9577- kube-system 2cb94888-9f16-48f2-8fc7-64c6889ef0fc 1068 0 2026-01-22 00:31:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-zqk9r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie057b862e53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:28.479 [INFO][6329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:29.891 [INFO][6466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" HandleID="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Workload="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:29.891 [INFO][6466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" 
HandleID="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Workload="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b04f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-zqk9r", "timestamp":"2026-01-22 00:34:29.891257933 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:29.891 [INFO][6466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:29.891 [INFO][6466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:29.968 [INFO][6466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.169 [INFO][6466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.517 [INFO][6466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.571 [INFO][6466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.619 [INFO][6466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.649 [INFO][6466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.649 
[INFO][6466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.684 [INFO][6466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5 Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.848 [INFO][6466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.958 [INFO][6466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.962 [INFO][6466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" host="localhost" Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.992 [INFO][6466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:34:31.476072 containerd[1635]: 2026-01-22 00:34:30.992 [INFO][6466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" HandleID="k8s-pod-network.fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Workload="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.490564 containerd[1635]: 2026-01-22 00:34:31.055 [INFO][6329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zqk9r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2cb94888-9f16-48f2-8fc7-64c6889ef0fc", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-zqk9r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie057b862e53", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:31.490564 containerd[1635]: 2026-01-22 00:34:31.058 [INFO][6329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.490564 containerd[1635]: 2026-01-22 00:34:31.062 [INFO][6329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie057b862e53 ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.490564 containerd[1635]: 2026-01-22 00:34:31.168 [INFO][6329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.495164 containerd[1635]: 2026-01-22 00:34:31.183 [INFO][6329] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zqk9r-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2cb94888-9f16-48f2-8fc7-64c6889ef0fc", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5", Pod:"coredns-66bc5c9577-zqk9r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie057b862e53", MAC:"de:96:7e:cf:af:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:34:31.495164 containerd[1635]: 2026-01-22 00:34:31.384 [INFO][6329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" Namespace="kube-system" Pod="coredns-66bc5c9577-zqk9r" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zqk9r-eth0" Jan 22 00:34:31.514025 kubelet[2961]: I0122 00:34:31.513572 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zlbdn" podStartSLOduration=186.513545765 podStartE2EDuration="3m6.513545765s" podCreationTimestamp="2026-01-22 00:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:34:31.50145303 +0000 UTC m=+191.530358595" watchObservedRunningTime="2026-01-22 00:34:31.513545765 +0000 UTC m=+191.542451339" Jan 22 00:34:31.635000 audit: BPF prog-id=244 op=LOAD Jan 22 00:34:31.637000 audit: BPF prog-id=245 op=LOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a2238 a2=98 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.637000 audit: BPF prog-id=245 op=UNLOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.637000 audit: BPF prog-id=246 op=LOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a2488 a2=98 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.637000 audit: BPF prog-id=247 op=LOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a2218 a2=98 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.637000 audit: BPF prog-id=247 op=UNLOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.637000 audit: BPF prog-id=246 op=UNLOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.637000 audit: BPF prog-id=248 op=LOAD Jan 22 00:34:31.637000 audit[6600]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a26e8 a2=98 a3=0 items=0 ppid=6581 pid=6600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135363139326530366266633631353234353533383565326339663834 Jan 22 00:34:31.646026 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:31.656000 audit[6645]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=6645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:31.656000 audit[6645]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffddae127b0 a2=0 a3=7ffddae1279c items=0 ppid=3099 pid=6645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:31.713000 audit[6645]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=6645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:31.713000 audit[6645]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffddae127b0 a2=0 a3=0 items=0 ppid=3099 pid=6645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:31.746230 containerd[1635]: time="2026-01-22T00:34:31.746179427Z" level=info msg="connecting to shim fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5" address="unix:///run/containerd/s/584437cd0174f10f56735a0a9044354f8370aa307db5d2efb2737c19b218c085" namespace=k8s.io 
protocol=ttrpc version=3 Jan 22 00:34:31.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:31.979776 systemd[1]: Started cri-containerd-fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5.scope - libcontainer container fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5. Jan 22 00:34:32.038000 audit[6686]: NETFILTER_CFG table=filter:141 family=2 entries=62 op=nft_register_chain pid=6686 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:34:32.038000 audit[6686]: SYSCALL arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7ffe9c342720 a2=0 a3=7ffe9c34270c items=0 ppid=5682 pid=6686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.038000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:34:32.062000 audit: BPF prog-id=249 op=LOAD Jan 22 00:34:32.065000 audit: BPF prog-id=250 op=LOAD Jan 22 00:34:32.065000 audit[6667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.065000 audit: BPF prog-id=250 op=UNLOAD Jan 22 00:34:32.065000 audit[6667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.068000 audit: BPF prog-id=251 op=LOAD Jan 22 00:34:32.068000 audit[6667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.070000 audit: BPF prog-id=252 op=LOAD Jan 22 00:34:32.070000 audit[6667]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.070000 audit: BPF prog-id=252 op=UNLOAD Jan 22 00:34:32.070000 audit[6667]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.070000 audit: BPF prog-id=251 op=UNLOAD Jan 22 00:34:32.070000 audit[6667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.070000 audit: BPF prog-id=253 op=LOAD Jan 22 00:34:32.070000 audit[6667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6655 pid=6667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653031313462363361363733343963303839376137663966326439 Jan 22 00:34:32.081290 systemd-resolved[1315]: Failed to determine 
the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 22 00:34:32.081424 containerd[1635]: time="2026-01-22T00:34:32.081122735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79ff4d8844-xf4gm,Uid:cc699319-6548-46e3-b846-fb40b8bdda3a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a56192e06bfc6152455385e2c9f8405e53bc33d355047994626f920440aff976\"" Jan 22 00:34:32.095501 containerd[1635]: time="2026-01-22T00:34:32.095210813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:34:32.262319 containerd[1635]: time="2026-01-22T00:34:32.259396056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:32.270587 containerd[1635]: time="2026-01-22T00:34:32.268970144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:32.270587 containerd[1635]: time="2026-01-22T00:34:32.269133748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:34:32.279123 kubelet[2961]: E0122 00:34:32.277251 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:32.279123 kubelet[2961]: E0122 00:34:32.277497 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 
00:34:32.279123 kubelet[2961]: E0122 00:34:32.277609 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:32.299717 kubelet[2961]: E0122 00:34:32.297766 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:34:32.327727 containerd[1635]: time="2026-01-22T00:34:32.327120328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zqk9r,Uid:2cb94888-9f16-48f2-8fc7-64c6889ef0fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5\"" Jan 22 00:34:32.342544 kubelet[2961]: E0122 00:34:32.342284 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:32.356513 kubelet[2961]: E0122 00:34:32.356446 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:32.374506 kubelet[2961]: E0122 00:34:32.374442 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:34:32.377785 kubelet[2961]: E0122 00:34:32.377168 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:34:32.394136 kubelet[2961]: E0122 00:34:32.373609 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4" Jan 22 00:34:32.396011 kubelet[2961]: E0122 00:34:32.392784 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:34:32.415951 containerd[1635]: time="2026-01-22T00:34:32.415721924Z" level=info msg="CreateContainer within sandbox \"fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:34:32.554102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871665172.mount: Deactivated successfully. Jan 22 00:34:32.568195 containerd[1635]: time="2026-01-22T00:34:32.568131604Z" level=info msg="Container 3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:34:32.622918 containerd[1635]: time="2026-01-22T00:34:32.619014196Z" level=info msg="CreateContainer within sandbox \"fae0114b63a67349c0897a7f9f2d9d85e49d652cbdf3730efd5368167111d4b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb\"" Jan 22 00:34:32.626331 containerd[1635]: time="2026-01-22T00:34:32.626290122Z" level=info msg="StartContainer for \"3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb\"" Jan 22 00:34:32.638151 containerd[1635]: time="2026-01-22T00:34:32.638085549Z" level=info msg="connecting to shim 3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb" address="unix:///run/containerd/s/584437cd0174f10f56735a0a9044354f8370aa307db5d2efb2737c19b218c085" protocol=ttrpc version=3 Jan 22 00:34:32.760549 systemd[1]: Started 
cri-containerd-3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb.scope - libcontainer container 3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb. Jan 22 00:34:32.788514 systemd-networkd[1529]: calie057b862e53: Gained IPv6LL Jan 22 00:34:32.829000 audit: BPF prog-id=254 op=LOAD Jan 22 00:34:32.831000 audit: BPF prog-id=255 op=LOAD Jan 22 00:34:32.831000 audit[6709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.832000 audit: BPF prog-id=255 op=UNLOAD Jan 22 00:34:32.832000 audit[6709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.835000 audit: BPF prog-id=256 op=LOAD Jan 22 00:34:32.835000 audit[6709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.835000 audit: BPF prog-id=257 op=LOAD Jan 22 00:34:32.835000 audit[6709]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.836000 audit: BPF prog-id=257 op=UNLOAD Jan 22 00:34:32.836000 audit[6709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.836000 audit: BPF prog-id=256 op=UNLOAD Jan 22 00:34:32.836000 audit[6709]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.836000 audit: BPF prog-id=258 op=LOAD Jan 22 00:34:32.836000 audit[6709]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=6655 pid=6709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365356238633832393830663130633266396463633635376239643032 Jan 22 00:34:32.954000 audit[6729]: NETFILTER_CFG table=filter:142 family=2 entries=20 op=nft_register_rule pid=6729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:32.954000 audit[6729]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff696ea600 a2=0 a3=7fff696ea5ec items=0 ppid=3099 pid=6729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:32.998000 audit[6729]: NETFILTER_CFG table=nat:143 family=2 entries=14 op=nft_register_rule pid=6729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:32.998000 audit[6729]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=3468 a0=3 a1=7fff696ea600 a2=0 a3=0 items=0 ppid=3099 pid=6729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:32.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:33.034123 containerd[1635]: time="2026-01-22T00:34:33.034073332Z" level=info msg="StartContainer for \"3e5b8c82980f10c2f9dcc657b9d0268638b5413a3bbba4e12e835e524e6300fb\" returns successfully" Jan 22 00:34:33.394398 kubelet[2961]: E0122 00:34:33.391109 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:33.397237 kubelet[2961]: E0122 00:34:33.397198 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:33.400003 kubelet[2961]: E0122 00:34:33.398202 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a" Jan 22 00:34:33.610790 kubelet[2961]: I0122 00:34:33.610601 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zqk9r" podStartSLOduration=188.610576517 podStartE2EDuration="3m8.610576517s" podCreationTimestamp="2026-01-22 00:31:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:34:33.549281125 +0000 UTC m=+193.578186709" watchObservedRunningTime="2026-01-22 00:34:33.610576517 +0000 UTC m=+193.639482081" Jan 22 00:34:34.139062 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:35860.service - OpenSSH per-connection server daemon (10.0.0.1:35860). Jan 22 00:34:34.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.25:22-10.0.0.1:35860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:34.147000 audit[6746]: NETFILTER_CFG table=filter:144 family=2 entries=17 op=nft_register_rule pid=6746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:34.147000 audit[6746]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda59bdf70 a2=0 a3=7ffda59bdf5c items=0 ppid=3099 pid=6746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:34.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:34.309000 audit[6746]: NETFILTER_CFG table=nat:145 family=2 entries=47 op=nft_register_chain pid=6746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:34.326480 kernel: kauditd_printk_skb: 152 callbacks suppressed Jan 22 00:34:34.326580 kernel: audit: type=1325 audit(1769042074.309:871): table=nat:145 family=2 entries=47 op=nft_register_chain pid=6746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:34.309000 audit[6746]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffda59bdf70 a2=0 a3=7ffda59bdf5c items=0 ppid=3099 pid=6746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:34.385476 sshd[6748]: Accepted publickey for core from 10.0.0.1 port 35860 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:34.398072 kernel: audit: type=1300 audit(1769042074.309:871): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffda59bdf70 a2=0 a3=7ffda59bdf5c items=0 ppid=3099 pid=6746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:34.398190 kernel: audit: type=1327 audit(1769042074.309:871): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:34.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:34.413230 sshd-session[6748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:34.381000 audit[6748]: USER_ACCT pid=6748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.427529 kernel: audit: type=1101 audit(1769042074.381:872): pid=6748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.428223 kubelet[2961]: E0122 00:34:34.428014 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 22 00:34:34.441262 systemd-logind[1609]: New session 23 of user core. Jan 22 00:34:34.404000 audit[6748]: CRED_ACQ pid=6748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.527256 kernel: audit: type=1103 audit(1769042074.404:873): pid=6748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.527409 kernel: audit: type=1006 audit(1769042074.407:874): pid=6748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 22 00:34:34.527451 kernel: audit: type=1300 audit(1769042074.407:874): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2ebcd30 a2=3 a3=0 items=0 ppid=1 pid=6748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:34.407000 audit[6748]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde2ebcd30 a2=3 a3=0 items=0 ppid=1 pid=6748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:34.562466 kernel: audit: type=1327 audit(1769042074.407:874): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:34.407000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:34.576469 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 22 00:34:34.588000 audit[6748]: USER_START pid=6748 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.633404 kernel: audit: type=1105 audit(1769042074.588:875): pid=6748 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.593000 audit[6752]: CRED_ACQ pid=6752 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:34.677103 kernel: audit: type=1103 audit(1769042074.593:876): pid=6752 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:35.151401 sshd[6752]: Connection closed by 10.0.0.1 port 35860 Jan 22 00:34:35.157049 sshd-session[6748]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:35.166000 audit[6748]: USER_END pid=6748 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:35.166000 audit[6748]: CRED_DISP pid=6748 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:35.179386 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:35860.service: Deactivated successfully. Jan 22 00:34:35.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.25:22-10.0.0.1:35860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:35.196781 systemd[1]: session-23.scope: Deactivated successfully. Jan 22 00:34:35.206331 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit. Jan 22 00:34:35.211441 systemd-logind[1609]: Removed session 23. Jan 22 00:34:35.425164 kubelet[2961]: E0122 00:34:35.424585 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:40.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.25:22-10.0.0.1:47824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:40.199135 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:47824.service - OpenSSH per-connection server daemon (10.0.0.1:47824). Jan 22 00:34:40.218012 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 22 00:34:40.218140 kernel: audit: type=1130 audit(1769042080.197:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.25:22-10.0.0.1:47824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:40.381000 audit[6770]: USER_ACCT pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.386316 sshd[6770]: Accepted publickey for core from 10.0.0.1 port 47824 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:40.397602 sshd-session[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:40.429418 kernel: audit: type=1101 audit(1769042080.381:881): pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.390000 audit[6770]: CRED_ACQ pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.464090 systemd-logind[1609]: New session 24 of user core. 
Jan 22 00:34:40.492448 kernel: audit: type=1103 audit(1769042080.390:882): pid=6770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.492576 kernel: audit: type=1006 audit(1769042080.390:883): pid=6770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 22 00:34:40.390000 audit[6770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6b7870a0 a2=3 a3=0 items=0 ppid=1 pid=6770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.536050 kernel: audit: type=1300 audit(1769042080.390:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6b7870a0 a2=3 a3=0 items=0 ppid=1 pid=6770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:40.390000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:40.542291 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 22 00:34:40.555299 kernel: audit: type=1327 audit(1769042080.390:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:40.565000 audit[6770]: USER_START pid=6770 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.569000 audit[6773]: CRED_ACQ pid=6773 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.671473 kernel: audit: type=1105 audit(1769042080.565:884): pid=6770 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.671594 kernel: audit: type=1103 audit(1769042080.569:885): pid=6773 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.909046 sshd[6773]: Connection closed by 10.0.0.1 port 47824 Jan 22 00:34:40.917156 sshd-session[6770]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:40.928000 audit[6770]: USER_END pid=6770 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.940203 systemd[1]: Started 
sshd@24-10.0.0.25:22-10.0.0.1:47836.service - OpenSSH per-connection server daemon (10.0.0.1:47836). Jan 22 00:34:40.941586 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:47824.service: Deactivated successfully. Jan 22 00:34:40.949292 systemd[1]: session-24.scope: Deactivated successfully. Jan 22 00:34:40.954020 systemd-logind[1609]: Session 24 logged out. Waiting for processes to exit. Jan 22 00:34:40.957402 systemd-logind[1609]: Removed session 24. Jan 22 00:34:40.928000 audit[6770]: CRED_DISP pid=6770 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:41.008126 kernel: audit: type=1106 audit(1769042080.928:886): pid=6770 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:41.008252 kernel: audit: type=1104 audit(1769042080.928:887): pid=6770 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:40.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.25:22-10.0.0.1:47836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:40.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.25:22-10.0.0.1:47824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:41.147000 audit[6783]: USER_ACCT pid=6783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:41.150512 sshd[6783]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:41.160000 audit[6783]: CRED_ACQ pid=6783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:41.160000 audit[6783]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38c237a0 a2=3 a3=0 items=0 ppid=1 pid=6783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:41.160000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:41.163596 sshd-session[6783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:41.192106 systemd-logind[1609]: New session 25 of user core. Jan 22 00:34:41.209641 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 22 00:34:41.221000 audit[6783]: USER_START pid=6783 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:41.226000 audit[6789]: CRED_ACQ pid=6789 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:42.675024 sshd[6789]: Connection closed by 10.0.0.1 port 47836 Jan 22 00:34:42.678251 sshd-session[6783]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:42.681000 audit[6783]: USER_END pid=6783 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:42.681000 audit[6783]: CRED_DISP pid=6783 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:42.705217 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:47836.service: Deactivated successfully. Jan 22 00:34:42.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.25:22-10.0.0.1:47836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:42.721353 systemd[1]: session-25.scope: Deactivated successfully. Jan 22 00:34:42.736267 systemd-logind[1609]: Session 25 logged out. Waiting for processes to exit. 
Jan 22 00:34:42.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.25:22-10.0.0.1:47902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:42.786105 systemd[1]: Started sshd@25-10.0.0.25:22-10.0.0.1:47902.service - OpenSSH per-connection server daemon (10.0.0.1:47902). Jan 22 00:34:42.789552 systemd-logind[1609]: Removed session 25. Jan 22 00:34:43.000621 containerd[1635]: time="2026-01-22T00:34:43.000469774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:34:43.212000 audit[6832]: USER_ACCT pid=6832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:43.216348 sshd[6832]: Accepted publickey for core from 10.0.0.1 port 47902 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:43.215000 audit[6832]: CRED_ACQ pid=6832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:43.215000 audit[6832]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea2ceb250 a2=3 a3=0 items=0 ppid=1 pid=6832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:43.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:43.229561 containerd[1635]: time="2026-01-22T00:34:43.226474609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:43.225371 sshd-session[6832]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:43.276015 containerd[1635]: time="2026-01-22T00:34:43.271325106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:34:43.276015 containerd[1635]: time="2026-01-22T00:34:43.271462411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:43.273676 systemd-logind[1609]: New session 26 of user core. Jan 22 00:34:43.282097 kubelet[2961]: E0122 00:34:43.278381 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:34:43.282097 kubelet[2961]: E0122 00:34:43.278595 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:34:43.282097 kubelet[2961]: E0122 00:34:43.278691 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:43.282097 kubelet[2961]: E0122 00:34:43.279025 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2" Jan 22 00:34:43.279653 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 22 00:34:43.304000 audit[6832]: USER_START pid=6832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:43.315000 audit[6837]: CRED_ACQ pid=6837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:43.456519 kubelet[2961]: E0122 00:34:43.456349 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 22 00:34:44.016249 kubelet[2961]: E0122 00:34:44.010512 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125" Jan 22 00:34:44.046689 containerd[1635]: time="2026-01-22T00:34:44.040174758Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:34:44.148151 containerd[1635]: time="2026-01-22T00:34:44.148092115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:44.157121 containerd[1635]: time="2026-01-22T00:34:44.157046342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:34:44.159068 containerd[1635]: time="2026-01-22T00:34:44.159034517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:44.167962 kubelet[2961]: E0122 00:34:44.166325 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:34:44.167962 kubelet[2961]: E0122 00:34:44.166389 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:34:44.167962 kubelet[2961]: E0122 00:34:44.166483 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:44.167962 kubelet[2961]: E0122 00:34:44.166532 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4" Jan 22 00:34:45.558536 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 22 00:34:45.559000 kernel: audit: type=1325 audit(1769042085.525:904): table=filter:146 family=2 entries=14 op=nft_register_rule pid=6853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:45.525000 audit[6853]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=6853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:45.533035 sshd-session[6832]: pam_unix(sshd:session): session closed for user core Jan 22 00:34:45.562123 sshd[6837]: Connection closed by 10.0.0.1 port 47902 Jan 22 00:34:45.539000 audit[6832]: USER_END pid=6832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:45.600084 systemd[1]: sshd@25-10.0.0.25:22-10.0.0.1:47902.service: Deactivated successfully. 
Jan 22 00:34:45.606120 kernel: audit: type=1106 audit(1769042085.539:905): pid=6832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:45.540000 audit[6832]: CRED_DISP pid=6832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:45.640304 kernel: audit: type=1104 audit(1769042085.540:906): pid=6832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:45.612631 systemd[1]: session-26.scope: Deactivated successfully. Jan 22 00:34:45.618166 systemd[1]: session-26.scope: Consumed 1.162s CPU time, 41.3M memory peak. Jan 22 00:34:45.621490 systemd-logind[1609]: Session 26 logged out. Waiting for processes to exit. Jan 22 00:34:45.634531 systemd[1]: Started sshd@26-10.0.0.25:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Jan 22 00:34:45.638664 systemd-logind[1609]: Removed session 26. 
Jan 22 00:34:45.525000 audit[6853]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9c00de90 a2=0 a3=7ffd9c00de7c items=0 ppid=3099 pid=6853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:45.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:45.572000 audit[6853]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=6853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:45.702090 kernel: audit: type=1300 audit(1769042085.525:904): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9c00de90 a2=0 a3=7ffd9c00de7c items=0 ppid=3099 pid=6853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:45.702158 kernel: audit: type=1327 audit(1769042085.525:904): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:45.702193 kernel: audit: type=1325 audit(1769042085.572:907): table=nat:147 family=2 entries=20 op=nft_register_rule pid=6853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:45.572000 audit[6853]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9c00de90 a2=0 a3=7ffd9c00de7c items=0 ppid=3099 pid=6853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:45.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:45.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.25:22-10.0.0.1:47902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:45.769077 kernel: audit: type=1300 audit(1769042085.572:907): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9c00de90 a2=0 a3=7ffd9c00de7c items=0 ppid=3099 pid=6853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:45.769246 kernel: audit: type=1327 audit(1769042085.572:907): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:45.769294 kernel: audit: type=1131 audit(1769042085.602:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.25:22-10.0.0.1:47902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:45.769330 kernel: audit: type=1130 audit(1769042085.633:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.25:22-10.0.0.1:50606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:34:45.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.25:22-10.0.0.1:50606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:34:46.003711 containerd[1635]: time="2026-01-22T00:34:46.002336779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:34:46.189216 containerd[1635]: time="2026-01-22T00:34:46.183513835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:46.207162 containerd[1635]: time="2026-01-22T00:34:46.207085194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:34:46.208017 containerd[1635]: time="2026-01-22T00:34:46.207361449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:46.211348 kubelet[2961]: E0122 00:34:46.211295 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:34:46.214114 kubelet[2961]: E0122 00:34:46.212591 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:34:46.214114 kubelet[2961]: E0122 00:34:46.213104 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:46.232033 containerd[1635]: 
time="2026-01-22T00:34:46.231587137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:34:46.304000 audit[6858]: USER_ACCT pid=6858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:46.309716 sshd[6858]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU Jan 22 00:34:46.315000 audit[6858]: CRED_ACQ pid=6858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:46.317000 audit[6858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3de1edc0 a2=3 a3=0 items=0 ppid=1 pid=6858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:46.317000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:34:46.318687 sshd-session[6858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:34:46.364223 systemd-logind[1609]: New session 27 of user core. Jan 22 00:34:46.367505 containerd[1635]: time="2026-01-22T00:34:46.367107191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:46.368323 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 22 00:34:46.376027 containerd[1635]: time="2026-01-22T00:34:46.374554936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:34:46.376300 containerd[1635]: time="2026-01-22T00:34:46.376175717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:46.376576 kubelet[2961]: E0122 00:34:46.376525 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:34:46.376980 kubelet[2961]: E0122 00:34:46.376695 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:34:46.377295 kubelet[2961]: E0122 00:34:46.377267 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:34:46.377644 kubelet[2961]: E0122 00:34:46.377593 2961 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b" Jan 22 00:34:46.382000 audit[6858]: USER_START pid=6858 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:46.391000 audit[6861]: CRED_ACQ pid=6861 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 22 00:34:46.699000 audit[6869]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=6869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:46.699000 audit[6869]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff7c5ade50 a2=0 a3=7fff7c5ade3c items=0 ppid=3099 pid=6869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:46.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:46.716000 audit[6869]: NETFILTER_CFG 
table=nat:149 family=2 entries=20 op=nft_register_rule pid=6869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:34:46.716000 audit[6869]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7c5ade50 a2=0 a3=0 items=0 ppid=3099 pid=6869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:34:46.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:34:46.993716 containerd[1635]: time="2026-01-22T00:34:46.993112700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:34:47.098580 containerd[1635]: time="2026-01-22T00:34:47.096435010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:34:47.110535 containerd[1635]: time="2026-01-22T00:34:47.109511820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:34:47.110535 containerd[1635]: time="2026-01-22T00:34:47.109663323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:34:47.112332 kubelet[2961]: E0122 00:34:47.112081 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:34:47.112332 kubelet[2961]: E0122 00:34:47.112157 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:34:47.112476 kubelet[2961]: E0122 00:34:47.112380 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79ff4d8844-vkmvl_calico-apiserver(e1d20bb6-82c3-4af1-9823-e27799a9a91a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:34:47.112476 kubelet[2961]: E0122 00:34:47.112423 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a"
Jan 22 00:34:47.122098 containerd[1635]: time="2026-01-22T00:34:47.117523402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 22 00:34:47.243431 containerd[1635]: time="2026-01-22T00:34:47.243372567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:34:47.267272 containerd[1635]: time="2026-01-22T00:34:47.266536029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 22 00:34:47.268403 containerd[1635]: time="2026-01-22T00:34:47.268355509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:34:47.279006 kubelet[2961]: E0122 00:34:47.275284 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:34:47.279006 kubelet[2961]: E0122 00:34:47.275433 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:34:47.279006 kubelet[2961]: E0122 00:34:47.275544 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:34:47.279006 kubelet[2961]: E0122 00:34:47.276120 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a"
Jan 22 00:34:47.448497 sshd[6861]: Connection closed by 10.0.0.1 port 50606
Jan 22 00:34:47.453282 sshd-session[6858]: pam_unix(sshd:session): session closed for user core
Jan 22 00:34:47.456000 audit[6858]: USER_END pid=6858 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:47.457000 audit[6858]: CRED_DISP pid=6858 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:47.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.25:22-10.0.0.1:50606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:47.474074 systemd[1]: sshd@26-10.0.0.25:22-10.0.0.1:50606.service: Deactivated successfully.
Jan 22 00:34:47.478722 systemd[1]: session-27.scope: Deactivated successfully.
Jan 22 00:34:47.489303 systemd-logind[1609]: Session 27 logged out. Waiting for processes to exit.
Jan 22 00:34:47.495624 systemd-logind[1609]: Removed session 27.
Jan 22 00:34:47.503584 systemd[1]: Started sshd@27-10.0.0.25:22-10.0.0.1:50620.service - OpenSSH per-connection server daemon (10.0.0.1:50620).
Jan 22 00:34:47.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.25:22-10.0.0.1:50620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:47.670000 audit[6874]: USER_ACCT pid=6874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:47.675000 audit[6874]: CRED_ACQ pid=6874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:47.677202 sshd[6874]: Accepted publickey for core from 10.0.0.1 port 50620 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:34:47.675000 audit[6874]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac062ef0 a2=3 a3=0 items=0 ppid=1 pid=6874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:47.675000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:47.677716 sshd-session[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:34:47.724546 systemd-logind[1609]: New session 28 of user core.
Jan 22 00:34:47.756326 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 22 00:34:47.780000 audit[6874]: USER_START pid=6874 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:47.785000 audit[6877]: CRED_ACQ pid=6877 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:48.117033 sshd[6877]: Connection closed by 10.0.0.1 port 50620
Jan 22 00:34:48.119088 sshd-session[6874]: pam_unix(sshd:session): session closed for user core
Jan 22 00:34:48.155000 audit[6874]: USER_END pid=6874 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:48.157000 audit[6874]: CRED_DISP pid=6874 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:48.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.25:22-10.0.0.1:50620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:48.181590 systemd[1]: sshd@27-10.0.0.25:22-10.0.0.1:50620.service: Deactivated successfully.
Jan 22 00:34:48.186453 systemd-logind[1609]: Session 28 logged out. Waiting for processes to exit.
Jan 22 00:34:48.186991 systemd[1]: session-28.scope: Deactivated successfully.
Jan 22 00:34:48.192652 systemd-logind[1609]: Removed session 28.
Jan 22 00:34:53.197077 kernel: kauditd_printk_skb: 27 callbacks suppressed
Jan 22 00:34:53.197239 kernel: audit: type=1130 audit(1769042093.168:929): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.25:22-10.0.0.1:50650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:53.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.25:22-10.0.0.1:50650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:53.169700 systemd[1]: Started sshd@28-10.0.0.25:22-10.0.0.1:50650.service - OpenSSH per-connection server daemon (10.0.0.1:50650).
Jan 22 00:34:53.452000 audit[6893]: USER_ACCT pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.459463 sshd[6893]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:34:53.460318 sshd-session[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:34:53.480682 kernel: audit: type=1101 audit(1769042093.452:930): pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.504231 kernel: audit: type=1103 audit(1769042093.457:931): pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.457000 audit[6893]: CRED_ACQ pid=6893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.487108 systemd-logind[1609]: New session 29 of user core.
Jan 22 00:34:53.525957 kernel: audit: type=1006 audit(1769042093.457:932): pid=6893 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 22 00:34:53.509220 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 22 00:34:53.457000 audit[6893]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff11b16130 a2=3 a3=0 items=0 ppid=1 pid=6893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:53.568969 kernel: audit: type=1300 audit(1769042093.457:932): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff11b16130 a2=3 a3=0 items=0 ppid=1 pid=6893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:53.570402 kernel: audit: type=1327 audit(1769042093.457:932): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:53.457000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:53.518000 audit[6893]: USER_START pid=6893 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.535000 audit[6896]: CRED_ACQ pid=6896 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.642072 kernel: audit: type=1105 audit(1769042093.518:933): pid=6893 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.642316 kernel: audit: type=1103 audit(1769042093.535:934): pid=6896 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.902194 sshd[6896]: Connection closed by 10.0.0.1 port 50650
Jan 22 00:34:53.903582 sshd-session[6893]: pam_unix(sshd:session): session closed for user core
Jan 22 00:34:53.907000 audit[6893]: USER_END pid=6893 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.922000 audit[6893]: CRED_DISP pid=6893 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.936330 systemd[1]: sshd@28-10.0.0.25:22-10.0.0.1:50650.service: Deactivated successfully.
Jan 22 00:34:53.942589 systemd[1]: session-29.scope: Deactivated successfully.
Jan 22 00:34:53.949692 systemd-logind[1609]: Session 29 logged out. Waiting for processes to exit.
Jan 22 00:34:53.952233 systemd-logind[1609]: Removed session 29.
Jan 22 00:34:53.959760 kernel: audit: type=1106 audit(1769042093.907:935): pid=6893 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.960304 kernel: audit: type=1104 audit(1769042093.922:936): pid=6893 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:53.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.25:22-10.0.0.1:50650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:56.038734 containerd[1635]: time="2026-01-22T00:34:56.038216571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 22 00:34:56.225370 containerd[1635]: time="2026-01-22T00:34:56.224949447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:34:56.231059 containerd[1635]: time="2026-01-22T00:34:56.229457640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 22 00:34:56.231209 containerd[1635]: time="2026-01-22T00:34:56.230107466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:34:56.233179 kubelet[2961]: E0122 00:34:56.232500 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:34:56.233179 kubelet[2961]: E0122 00:34:56.232557 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:34:56.233179 kubelet[2961]: E0122 00:34:56.232691 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-567c775cb4-2tqd7_calico-apiserver(a8109f84-107e-4926-bb88-cd99083f8125): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:34:56.233179 kubelet[2961]: E0122 00:34:56.232726 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125"
Jan 22 00:34:56.994457 kubelet[2961]: E0122 00:34:56.994214 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2"
Jan 22 00:34:57.986695 kubelet[2961]: E0122 00:34:57.986439 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4"
Jan 22 00:34:57.986695 kubelet[2961]: E0122 00:34:57.986517 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a"
Jan 22 00:34:58.608000 audit[6915]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=6915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 22 00:34:58.615493 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 22 00:34:58.615736 kernel: audit: type=1325 audit(1769042098.608:938): table=filter:150 family=2 entries=26 op=nft_register_rule pid=6915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 22 00:34:58.608000 audit[6915]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6c05fcb0 a2=0 a3=7ffd6c05fc9c items=0 ppid=3099 pid=6915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:58.653967 kernel: audit: type=1300 audit(1769042098.608:938): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6c05fcb0 a2=0 a3=7ffd6c05fc9c items=0 ppid=3099 pid=6915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:58.654118 kernel: audit: type=1327 audit(1769042098.608:938): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 22 00:34:58.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 22 00:34:58.666000 audit[6915]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=6915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 22 00:34:58.666000 audit[6915]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6c05fcb0 a2=0 a3=7ffd6c05fc9c items=0 ppid=3099 pid=6915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:58.701783 kernel: audit: type=1325 audit(1769042098.666:939): table=nat:151 family=2 entries=104 op=nft_register_chain pid=6915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 22 00:34:58.702092 kernel: audit: type=1300 audit(1769042098.666:939): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6c05fcb0 a2=0 a3=7ffd6c05fc9c items=0 ppid=3099 pid=6915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:58.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 22 00:34:58.711413 kernel: audit: type=1327 audit(1769042098.666:939): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 22 00:34:58.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.25:22-10.0.0.1:46162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:58.927295 systemd[1]: Started sshd@29-10.0.0.25:22-10.0.0.1:46162.service - OpenSSH per-connection server daemon (10.0.0.1:46162).
Jan 22 00:34:58.947092 kernel: audit: type=1130 audit(1769042098.925:940): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.25:22-10.0.0.1:46162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:59.043000 audit[6917]: USER_ACCT pid=6917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.051143 sshd-session[6917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:34:59.054638 sshd[6917]: Accepted publickey for core from 10.0.0.1 port 46162 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:34:59.062630 systemd-logind[1609]: New session 30 of user core.
Jan 22 00:34:59.069060 kernel: audit: type=1101 audit(1769042099.043:941): pid=6917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.048000 audit[6917]: CRED_ACQ pid=6917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.098177 kernel: audit: type=1103 audit(1769042099.048:942): pid=6917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.098358 kernel: audit: type=1006 audit(1769042099.048:943): pid=6917 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 22 00:34:59.048000 audit[6917]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3de92740 a2=3 a3=0 items=0 ppid=1 pid=6917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:34:59.048000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:34:59.111989 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 22 00:34:59.122000 audit[6917]: USER_START pid=6917 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.125000 audit[6920]: CRED_ACQ pid=6920 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.364685 sshd[6920]: Connection closed by 10.0.0.1 port 46162
Jan 22 00:34:59.365380 sshd-session[6917]: pam_unix(sshd:session): session closed for user core
Jan 22 00:34:59.367000 audit[6917]: USER_END pid=6917 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.367000 audit[6917]: CRED_DISP pid=6917 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:34:59.377493 systemd-logind[1609]: Session 30 logged out. Waiting for processes to exit.
Jan 22 00:34:59.379120 systemd[1]: sshd@29-10.0.0.25:22-10.0.0.1:46162.service: Deactivated successfully.
Jan 22 00:34:59.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.25:22-10.0.0.1:46162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:34:59.384105 systemd[1]: session-30.scope: Deactivated successfully.
Jan 22 00:34:59.391170 systemd-logind[1609]: Removed session 30.
Jan 22 00:34:59.994174 kubelet[2961]: E0122 00:34:59.993989 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"
Jan 22 00:35:00.989582 kubelet[2961]: E0122 00:35:00.989453 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-vkmvl" podUID="e1d20bb6-82c3-4af1-9823-e27799a9a91a"
Jan 22 00:35:04.407020 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 22 00:35:04.407160 kernel: audit: type=1130 audit(1769042104.384:949): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.25:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:35:04.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.25:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:35:04.385383 systemd[1]: Started sshd@30-10.0.0.25:22-10.0.0.1:46192.service - OpenSSH per-connection server daemon (10.0.0.1:46192).
Jan 22 00:35:04.491000 audit[6944]: USER_ACCT pid=6944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.495604 sshd[6944]: Accepted publickey for core from 10.0.0.1 port 46192 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:35:04.501516 sshd-session[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:35:04.519207 kernel: audit: type=1101 audit(1769042104.491:950): pid=6944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.498000 audit[6944]: CRED_ACQ pid=6944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.521028 systemd-logind[1609]: New session 31 of user core.
Jan 22 00:35:04.558231 kernel: audit: type=1103 audit(1769042104.498:951): pid=6944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.561104 kernel: audit: type=1006 audit(1769042104.498:952): pid=6944 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1
Jan 22 00:35:04.561149 kernel: audit: type=1300 audit(1769042104.498:952): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff67e39c80 a2=3 a3=0 items=0 ppid=1 pid=6944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:35:04.498000 audit[6944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff67e39c80 a2=3 a3=0 items=0 ppid=1 pid=6944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:35:04.498000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:35:04.592560 kernel: audit: type=1327 audit(1769042104.498:952): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:35:04.598640 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 22 00:35:04.610000 audit[6944]: USER_START pid=6944 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.638305 kernel: audit: type=1105 audit(1769042104.610:953): pid=6944 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.638471 kernel: audit: type=1103 audit(1769042104.614:954): pid=6947 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.614000 audit[6947]: CRED_ACQ pid=6947 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.967074 sshd[6947]: Connection closed by 10.0.0.1 port 46192
Jan 22 00:35:04.967219 sshd-session[6944]: pam_unix(sshd:session): session closed for user core
Jan 22 00:35:04.973000 audit[6944]: USER_END pid=6944 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.981161 systemd-logind[1609]: Session 31 logged out. Waiting for processes to exit.
Jan 22 00:35:04.985195 systemd[1]: sshd@30-10.0.0.25:22-10.0.0.1:46192.service: Deactivated successfully.
Jan 22 00:35:04.990270 systemd[1]: session-31.scope: Deactivated successfully.
Jan 22 00:35:04.995082 systemd-logind[1609]: Removed session 31.
Jan 22 00:35:05.002623 kernel: audit: type=1106 audit(1769042104.973:955): pid=6944 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:05.002707 kernel: audit: type=1104 audit(1769042104.973:956): pid=6944 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.973000 audit[6944]: CRED_DISP pid=6944 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:04.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.25:22-10.0.0.1:46192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:35:07.993526 containerd[1635]: time="2026-01-22T00:35:07.992618966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 22 00:35:08.084080 containerd[1635]: time="2026-01-22T00:35:08.083676896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:35:08.095278 containerd[1635]: time="2026-01-22T00:35:08.095204960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 22 00:35:08.097068 containerd[1635]: time="2026-01-22T00:35:08.095654768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:35:08.097164 kubelet[2961]: E0122 00:35:08.096636 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 22 00:35:08.097164 kubelet[2961]: E0122 00:35:08.096700 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 22 00:35:08.097164 kubelet[2961]: E0122 00:35:08.097070 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-956jd_calico-system(6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:35:08.097164 kubelet[2961]: E0122 00:35:08.097126 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-956jd" podUID="6839bde6-4689-4cd8-9f1c-2a5a8b19cdc2"
Jan 22 00:35:08.987785 kubelet[2961]: E0122 00:35:08.987662 2961 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 22 00:35:08.989266 kubelet[2961]: E0122 00:35:08.988630 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-567c775cb4-2tqd7" podUID="a8109f84-107e-4926-bb88-cd99083f8125"
Jan 22 00:35:08.997232 containerd[1635]: time="2026-01-22T00:35:08.996715180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 22 00:35:09.104670 containerd[1635]: time="2026-01-22T00:35:09.100772888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:35:09.110411 containerd[1635]: time="2026-01-22T00:35:09.110255846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 22 00:35:09.110577 containerd[1635]: time="2026-01-22T00:35:09.110465206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:35:09.111293 kubelet[2961]: E0122 00:35:09.111111 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:35:09.111293 kubelet[2961]: E0122 00:35:09.111257 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 22 00:35:09.112016 kubelet[2961]: E0122 00:35:09.111544 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79ff4d8844-xf4gm_calico-apiserver(cc699319-6548-46e3-b846-fb40b8bdda3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:35:09.112016 kubelet[2961]: E0122 00:35:09.111592 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79ff4d8844-xf4gm" podUID="cc699319-6548-46e3-b846-fb40b8bdda3a"
Jan 22 00:35:09.114544 containerd[1635]: time="2026-01-22T00:35:09.112377521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 22 00:35:09.191578 containerd[1635]: time="2026-01-22T00:35:09.191360971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:35:09.195741 containerd[1635]: time="2026-01-22T00:35:09.195242778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 22 00:35:09.195741 containerd[1635]: time="2026-01-22T00:35:09.195417052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:35:09.196550 kubelet[2961]: E0122 00:35:09.196371 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 22 00:35:09.196550 kubelet[2961]: E0122 00:35:09.196436 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 22 00:35:09.196550 kubelet[2961]: E0122 00:35:09.196542 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6fd67fc48-sbqdz_calico-system(cddc50d2-bbfe-4bdb-8697-ec1251db07b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:35:09.196702 kubelet[2961]: E0122 00:35:09.196592 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fd67fc48-sbqdz" podUID="cddc50d2-bbfe-4bdb-8697-ec1251db07b4"
Jan 22 00:35:09.989494 systemd[1]: Started sshd@31-10.0.0.25:22-10.0.0.1:38668.service - OpenSSH per-connection server daemon (10.0.0.1:38668).
Jan 22 00:35:10.016731 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 22 00:35:10.016795 kernel: audit: type=1130 audit(1769042109.988:958): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.25:22-10.0.0.1:38668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:35:09.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.25:22-10.0.0.1:38668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:35:10.157067 kernel: audit: type=1101 audit(1769042110.124:959): pid=6960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.124000 audit[6960]: USER_ACCT pid=6960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.146186 systemd-logind[1609]: New session 32 of user core.
Jan 22 00:35:10.129698 sshd-session[6960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 22 00:35:10.159641 sshd[6960]: Accepted publickey for core from 10.0.0.1 port 38668 ssh2: RSA SHA256:qtEaH7fZdyVsdwtTQgN3pcjvZV5CZs6IZV1K7f9HeKU
Jan 22 00:35:10.165207 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 22 00:35:10.126000 audit[6960]: CRED_ACQ pid=6960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.196958 kernel: audit: type=1103 audit(1769042110.126:960): pid=6960 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.250007 kernel: audit: type=1006 audit(1769042110.127:961): pid=6960 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1
Jan 22 00:35:10.250177 kernel: audit: type=1300 audit(1769042110.127:961): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce27b9a00 a2=3 a3=0 items=0 ppid=1 pid=6960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:35:10.127000 audit[6960]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce27b9a00 a2=3 a3=0 items=0 ppid=1 pid=6960 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 22 00:35:10.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:35:10.263466 kernel: audit: type=1327 audit(1769042110.127:961): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 22 00:35:10.172000 audit[6960]: USER_START pid=6960 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.295280 kernel: audit: type=1105 audit(1769042110.172:962): pid=6960 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.180000 audit[6963]: CRED_ACQ pid=6963 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.322993 kernel: audit: type=1103 audit(1769042110.180:963): pid=6963 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.590878 sshd[6963]: Connection closed by 10.0.0.1 port 38668
Jan 22 00:35:10.591668 sshd-session[6960]: pam_unix(sshd:session): session closed for user core
Jan 22 00:35:10.592000 audit[6960]: USER_END pid=6960 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.601772 systemd[1]: sshd@31-10.0.0.25:22-10.0.0.1:38668.service: Deactivated successfully.
Jan 22 00:35:10.615778 systemd[1]: session-32.scope: Deactivated successfully.
Jan 22 00:35:10.621572 systemd-logind[1609]: Session 32 logged out. Waiting for processes to exit.
Jan 22 00:35:10.625463 systemd-logind[1609]: Removed session 32.
Jan 22 00:35:10.632472 kernel: audit: type=1106 audit(1769042110.592:964): pid=6960 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.632562 kernel: audit: type=1104 audit(1769042110.593:965): pid=6960 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.593000 audit[6960]: CRED_DISP pid=6960 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 22 00:35:10.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.25:22-10.0.0.1:38668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:35:10.995570 containerd[1635]: time="2026-01-22T00:35:10.995332437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 22 00:35:11.077604 containerd[1635]: time="2026-01-22T00:35:11.077471988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:35:11.084139 containerd[1635]: time="2026-01-22T00:35:11.084033420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 22 00:35:11.084288 containerd[1635]: time="2026-01-22T00:35:11.084168010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:35:11.085099 kubelet[2961]: E0122 00:35:11.084779 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 22 00:35:11.085556 kubelet[2961]: E0122 00:35:11.085115 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 22 00:35:11.085556 kubelet[2961]: E0122 00:35:11.085214 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:35:11.090210 containerd[1635]: time="2026-01-22T00:35:11.088185000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 22 00:35:11.156992 containerd[1635]: time="2026-01-22T00:35:11.156722860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 22 00:35:11.162282 containerd[1635]: time="2026-01-22T00:35:11.162038319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 22 00:35:11.162282 containerd[1635]: time="2026-01-22T00:35:11.162157922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 22 00:35:11.162477 kubelet[2961]: E0122 00:35:11.162429 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 22 00:35:11.162547 kubelet[2961]: E0122 00:35:11.162490 2961 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 22 00:35:11.163629 kubelet[2961]: E0122 00:35:11.162583 2961 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kfg9f_calico-system(d3f33826-c9a7-4e28-a985-814cedd1e52b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 22 00:35:11.163629 kubelet[2961]: E0122 00:35:11.162656 2961 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kfg9f" podUID="d3f33826-c9a7-4e28-a985-814cedd1e52b"