Jan 28 01:59:35.492965 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:22:24 -00 2026
Jan 28 01:59:35.493005 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3
Jan 28 01:59:35.493024 kernel: BIOS-provided physical RAM map:
Jan 28 01:59:35.493036 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 01:59:35.493044 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 01:59:35.493055 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 01:59:35.493067 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 28 01:59:35.493078 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 28 01:59:35.493087 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 01:59:35.493098 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 01:59:35.493112 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:59:35.493123 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 01:59:35.493133 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:59:35.493144 kernel: NX (Execute Disable) protection: active
Jan 28 01:59:35.493157 kernel: APIC: Static calls initialized
Jan 28 01:59:35.493171 kernel: SMBIOS 2.8 present.
Jan 28 01:59:35.493183 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 28 01:59:35.493194 kernel: DMI: Memory slots populated: 1/1
Jan 28 01:59:35.493206 kernel: Hypervisor detected: KVM
Jan 28 01:59:35.493215 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 01:59:35.493227 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:59:35.493238 kernel: kvm-clock: using sched offset of 9742603239 cycles
Jan 28 01:59:35.493251 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:59:35.493313 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:59:35.493335 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:59:35.493346 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:59:35.493358 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 28 01:59:35.493369 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 01:59:35.493381 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:59:35.493392 kernel: Using GB pages for direct mapping
Jan 28 01:59:35.493403 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:59:35.493418 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 28 01:59:35.493429 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493440 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493451 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493461 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 28 01:59:35.493471 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493481 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493494 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493505 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:59:35.493519 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 28 01:59:35.493531 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 28 01:59:35.493543 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 28 01:59:35.493558 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 28 01:59:35.493569 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 28 01:59:35.493579 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 28 01:59:35.493590 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 28 01:59:35.493600 kernel: No NUMA configuration found
Jan 28 01:59:35.493611 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 28 01:59:35.493622 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 28 01:59:35.493635 kernel: Zone ranges:
Jan 28 01:59:35.493646 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:59:35.493656 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 28 01:59:35.493667 kernel: Normal empty
Jan 28 01:59:35.493678 kernel: Device empty
Jan 28 01:59:35.493689 kernel: Movable zone start for each node
Jan 28 01:59:35.493699 kernel: Early memory node ranges
Jan 28 01:59:35.493712 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 01:59:35.493724 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 28 01:59:35.493737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 28 01:59:35.493748 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:59:35.493761 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 01:59:35.493773 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 28 01:59:35.493785 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:59:35.493798 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:59:35.493815 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:59:35.493827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:59:35.493911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:59:35.493925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:59:35.493938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:59:35.493950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:59:35.493963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:59:35.493980 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:59:35.493992 kernel: TSC deadline timer available
Jan 28 01:59:35.494005 kernel: CPU topo: Max. logical packages: 1
Jan 28 01:59:35.494016 kernel: CPU topo: Max. logical dies: 1
Jan 28 01:59:35.494029 kernel: CPU topo: Max. dies per package: 1
Jan 28 01:59:35.494042 kernel: CPU topo: Max. threads per core: 1
Jan 28 01:59:35.494053 kernel: CPU topo: Num. cores per package: 4
Jan 28 01:59:35.494069 kernel: CPU topo: Num. threads per package: 4
Jan 28 01:59:35.494089 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 01:59:35.494101 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:59:35.494114 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:59:35.494126 kernel: kvm-guest: setup PV sched yield
Jan 28 01:59:35.494138 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 01:59:35.494149 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:59:35.494163 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:59:35.494178 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:59:35.494191 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 01:59:35.494203 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 01:59:35.494217 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:59:35.494227 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:59:35.494240 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:59:35.494253 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3
Jan 28 01:59:35.495416 kernel: random: crng init done
Jan 28 01:59:35.495430 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:59:35.495443 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:59:35.495454 kernel: Fallback order for Node 0: 0
Jan 28 01:59:35.495467 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 28 01:59:35.495479 kernel: Policy zone: DMA32
Jan 28 01:59:35.495497 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:59:35.495509 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:59:35.495521 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 28 01:59:35.495534 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 01:59:35.495545 kernel: Dynamic Preempt: voluntary
Jan 28 01:59:35.495558 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:59:35.495571 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:59:35.495584 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:59:35.495602 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:59:35.495614 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:59:35.495628 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:59:35.495639 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:59:35.495653 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:59:35.495664 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:59:35.495677 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:59:35.495694 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:59:35.495705 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:59:35.495718 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:59:35.495742 kernel: Console: colour VGA+ 80x25
Jan 28 01:59:35.495758 kernel: printk: legacy console [ttyS0] enabled
Jan 28 01:59:35.495770 kernel: ACPI: Core revision 20240827
Jan 28 01:59:35.495784 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:59:35.495797 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:59:35.495810 kernel: x2apic enabled
Jan 28 01:59:35.495826 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:59:35.496484 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:59:35.496501 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:59:35.496515 kernel: kvm-guest: setup PV IPIs
Jan 28 01:59:35.496534 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:59:35.496547 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:59:35.496560 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:59:35.496573 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:59:35.496586 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:59:35.496599 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:59:35.496611 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:59:35.496628 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:59:35.496641 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:59:35.496653 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:59:35.496667 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:59:35.496681 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:59:35.496694 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:59:35.496707 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:59:35.496724 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:59:35.496737 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:59:35.496750 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:59:35.496763 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:59:35.496777 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:59:35.496788 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:59:35.496802 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:59:35.496820 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:59:35.496833 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:59:35.496927 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 01:59:35.496941 kernel: landlock: Up and running.
Jan 28 01:59:35.496953 kernel: SELinux: Initializing.
Jan 28 01:59:35.496966 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:59:35.496979 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:59:35.496997 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:59:35.497010 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:59:35.497023 kernel: signal: max sigframe size: 1776
Jan 28 01:59:35.497035 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:59:35.497050 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:59:35.497062 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 01:59:35.497075 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:59:35.497092 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:59:35.497104 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:59:35.497117 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:59:35.497130 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:59:35.497142 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:59:35.497158 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 28 01:59:35.497170 kernel: devtmpfs: initialized
Jan 28 01:59:35.497187 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:59:35.497201 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:59:35.497213 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:59:35.497226 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:59:35.497240 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:59:35.497251 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:59:35.497313 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:59:35.497333 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:59:35.497346 kernel: audit: type=2000 audit(1769565540.363:1): state=initialized audit_enabled=0 res=1
Jan 28 01:59:35.497391 kernel: cpuidle: using governor menu
Jan 28 01:59:35.497403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:59:35.497414 kernel: dca service started, version 1.12.1
Jan 28 01:59:35.497426 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 28 01:59:35.497437 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 01:59:35.497451 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:59:35.497462 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 01:59:35.497476 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:59:35.497487 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:59:35.497498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:59:35.497510 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:59:35.497521 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:59:35.497534 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:59:35.497546 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:59:35.497557 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:59:35.497567 kernel: ACPI: Interpreter enabled
Jan 28 01:59:35.497578 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:59:35.497589 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:59:35.497601 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:59:35.497614 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:59:35.497625 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:59:35.497637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:59:35.498062 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:59:35.500473 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:59:35.500731 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:59:35.500754 kernel: PCI host bridge to bus 0000:00
Jan 28 01:59:35.501055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:59:35.501545 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:59:35.501761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:59:35.502049 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 01:59:35.502306 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 01:59:35.502526 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 28 01:59:35.502734 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:59:35.503055 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 01:59:35.503352 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 01:59:35.503585 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 28 01:59:35.503814 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 28 01:59:35.504116 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 28 01:59:35.504957 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:59:35.505231 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 23437 usecs
Jan 28 01:59:35.507618 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 01:59:35.507969 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 28 01:59:35.508245 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 28 01:59:35.508539 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 28 01:59:35.508786 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 01:59:35.509114 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 28 01:59:35.510511 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 28 01:59:35.510753 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 28 01:59:35.511071 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 01:59:35.511350 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 28 01:59:35.511584 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 28 01:59:35.511817 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 28 01:59:35.512114 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 28 01:59:35.514467 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 01:59:35.514699 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:59:35.515003 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Jan 28 01:59:35.515235 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 01:59:35.515551 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 28 01:59:35.515820 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 28 01:59:35.516157 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 01:59:35.517538 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 28 01:59:35.517560 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:59:35.517574 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:59:35.517586 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:59:35.517598 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:59:35.517615 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:59:35.517628 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:59:35.517640 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:59:35.517651 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:59:35.517662 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:59:35.517674 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:59:35.517685 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:59:35.517701 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:59:35.517713 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:59:35.517725 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:59:35.517737 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:59:35.517750 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:59:35.517761 kernel: iommu: Default domain type: Translated
Jan 28 01:59:35.517773 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:59:35.517787 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:59:35.517799 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:59:35.517811 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 01:59:35.517823 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 28 01:59:35.518163 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:59:35.518508 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:59:35.518780 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:59:35.518805 kernel: vgaarb: loaded
Jan 28 01:59:35.518820 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:59:35.518833 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:59:35.518924 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:59:35.518938 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:59:35.518951 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:59:35.518964 kernel: pnp: PnP ACPI init
Jan 28 01:59:35.519260 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 01:59:35.521376 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:59:35.521390 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:59:35.521404 kernel: NET: Registered PF_INET protocol family
Jan 28 01:59:35.521418 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:59:35.521431 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:59:35.521445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:59:35.521463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:59:35.521476 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:59:35.521491 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:59:35.521502 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:59:35.521516 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:59:35.521528 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:59:35.521541 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:59:35.521809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:59:35.522142 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:59:35.523548 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:59:35.523802 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 01:59:35.524165 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 01:59:35.524470 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 28 01:59:35.524497 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:59:35.524513 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:59:35.524526 kernel: Initialise system trusted keyrings
Jan 28 01:59:35.524540 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:59:35.524554 kernel: Key type asymmetric registered
Jan 28 01:59:35.524566 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:59:35.524579 kernel: hrtimer: interrupt took 11157029 ns
Jan 28 01:59:35.524594 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:59:35.524610 kernel: io scheduler mq-deadline registered
Jan 28 01:59:35.524624 kernel: io scheduler kyber registered
Jan 28 01:59:35.524639 kernel: io scheduler bfq registered
Jan 28 01:59:35.524650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:59:35.524665 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:59:35.524680 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:59:35.524691 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:59:35.524708 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:59:35.524722 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:59:35.524734 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:59:35.524748 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:59:35.524761 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:59:35.525121 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:59:35.525146 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 28 01:59:35.527538 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:59:35.527772 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:59:17 UTC (1769565557)
Jan 28 01:59:35.528758 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 01:59:35.528780 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:59:35.528793 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:59:35.528805 kernel: Segment Routing with IPv6
Jan 28 01:59:35.528822 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:59:35.528835 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:59:35.528930 kernel: Key type dns_resolver registered
Jan 28 01:59:35.528943 kernel: IPI shorthand broadcast: enabled
Jan 28 01:59:35.528955 kernel: sched_clock: Marking stable (11001058360, 3817638937)->(17980937601, -3162240304)
Jan 28 01:59:35.528968 kernel: registered taskstats version 1
Jan 28 01:59:35.528980 kernel: Loading compiled-in X.509 certificates
Jan 28 01:59:35.528996 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 0eb3c2aae9988d4ab7f0e142c4f5c61453c9ddb3'
Jan 28 01:59:35.529008 kernel: Demotion targets for Node 0: null
Jan 28 01:59:35.529021 kernel: Key type .fscrypt registered
Jan 28 01:59:35.529032 kernel: Key type fscrypt-provisioning registered
Jan 28 01:59:35.529045 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:59:35.529058 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:59:35.529071 kernel: ima: No architecture policies found
Jan 28 01:59:35.529086 kernel: clk: Disabling unused clocks
Jan 28 01:59:35.529099 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 28 01:59:35.529111 kernel: Write protecting the kernel read-only data: 47104k
Jan 28 01:59:35.529124 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 28 01:59:35.529136 kernel: Run /init as init process
Jan 28 01:59:35.529149 kernel: with arguments:
Jan 28 01:59:35.529162 kernel: /init
Jan 28 01:59:35.529177 kernel: with environment:
Jan 28 01:59:35.529189 kernel: HOME=/
Jan 28 01:59:35.529202 kernel: TERM=linux
Jan 28 01:59:35.529214 kernel: SCSI subsystem initialized
Jan 28 01:59:35.529227 kernel: libata version 3.00 loaded.
Jan 28 01:59:35.530672 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:59:35.530698 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:59:35.531048 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 28 01:59:35.531368 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 28 01:59:35.531638 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:59:35.534435 kernel: scsi host0: ahci
Jan 28 01:59:35.534761 kernel: scsi host1: ahci
Jan 28 01:59:35.535144 kernel: scsi host2: ahci
Jan 28 01:59:35.535498 kernel: scsi host3: ahci
Jan 28 01:59:35.535796 kernel: scsi host4: ahci
Jan 28 01:59:35.537410 kernel: scsi host5: ahci
Jan 28 01:59:35.537433 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 28 01:59:35.537449 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 28 01:59:35.537471 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 28 01:59:35.537483 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 28 01:59:35.537496 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 28 01:59:35.537511 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 28 01:59:35.537524 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:59:35.537536 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:59:35.537548 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:59:35.537563 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:59:35.537575 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:59:35.537587 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:59:35.537599 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:59:35.537612 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:59:35.537625 kernel: ata3.00: applying bridge limits
Jan 28 01:59:35.537637 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:59:35.537649 kernel: ata3.00: configured for UDMA/100
Jan 28 01:59:35.538001 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:59:35.538250 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:59:35.540612 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 28 01:59:35.540633 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:59:35.540647 kernel: GPT:16515071 != 27000831
Jan 28 01:59:35.540665 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:59:35.541010 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:59:35.541031 kernel: GPT:16515071 != 27000831
Jan 28 01:59:35.541043 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:59:35.541056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:59:35.541069 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:59:35.541382 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:59:35.541406 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:59:35.541420 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:59:35.541433 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 28 01:59:35.541446 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 28 01:59:35.541459 kernel: raid6: avx2x4 gen() 16630 MB/s
Jan 28 01:59:35.541472 kernel: raid6: avx2x2 gen() 12587 MB/s
Jan 28 01:59:35.541484 kernel: raid6: avx2x1 gen() 6830 MB/s
Jan 28 01:59:35.541499 kernel: raid6: using algorithm avx2x4 gen() 16630 MB/s
Jan 28 01:59:35.541512 kernel: raid6: .... xor() 4072 MB/s, rmw enabled
Jan 28 01:59:35.541525 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:59:35.541538 kernel: xor: automatically using best checksumming function avx
Jan 28 01:59:35.541553 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 6101322442 wd_nsec: 6101322469
Jan 28 01:59:35.541568 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:59:35.541581 kernel: BTRFS: device fsid 0f5fa021-4357-40bb-b32a-e1579c5824ad devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 28 01:59:35.541594 kernel: BTRFS info (device dm-0): first mount of filesystem 0f5fa021-4357-40bb-b32a-e1579c5824ad
Jan 28 01:59:35.541606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:59:35.541619 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:59:35.541631 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 28 01:59:35.541643 kernel: loop: module loaded
Jan 28 01:59:35.541658 kernel: loop0: detected capacity change from 0 to 100552
Jan 28 01:59:35.541670 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 28 01:59:35.541683 systemd[1]: Successfully made /usr/ read-only.
Jan 28 01:59:35.541700 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 01:59:35.541714 systemd[1]: Detected virtualization kvm.
Jan 28 01:59:35.541727 systemd[1]: Detected architecture x86-64.
Jan 28 01:59:35.541742 systemd[1]: Running in initrd.
Jan 28 01:59:35.541754 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:59:35.541769 systemd[1]: Hostname set to .
Jan 28 01:59:35.541784 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 01:59:35.541798 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:59:35.541811 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:59:35.541829 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:59:35.541924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:59:35.541941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:59:35.541955 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:59:35.541970 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:59:35.541984 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:59:35.542002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:59:35.542016 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:59:35.542030 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:59:35.542044 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:59:35.542059 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:59:35.542073 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:59:35.542090 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:59:35.542104 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:59:35.542120 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:59:35.542133 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:59:35.542148 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:59:35.542163 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 28 01:59:35.542177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:59:35.542195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:59:35.542209 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:59:35.542223 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:59:35.542239 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:59:35.542254 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:59:35.543085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:59:35.543105 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:59:35.543125 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 28 01:59:35.543139 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:59:35.543153 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:59:35.543167 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:59:35.543184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:59:35.543198 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:59:35.543212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:59:35.543226 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:59:35.543328 systemd-journald[320]: Collecting audit messages is enabled.
Jan 28 01:59:35.543367 kernel: audit: type=1130 audit(1769565575.482:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:35.543384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:59:35.543399 systemd-journald[320]: Journal started
Jan 28 01:59:35.543429 systemd-journald[320]: Runtime Journal (/run/log/journal/3e915e9ddd5c45c0bdd691fa2cfd06dd) is 6M, max 48.2M, 42.1M free.
Jan 28 01:59:35.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:35.606611 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:59:35.606675 kernel: audit: type=1130 audit(1769565575.603:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:35.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:35.617089 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:59:35.741178 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:59:36.431475 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
kernel: Bridge firewalling registered
Jan 28 01:59:36.431559 kernel: audit: type=1130 audit(1769565576.300:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:35.941134 systemd-modules-load[322]: Inserted module 'br_netfilter'
Jan 28 01:59:36.535811 kernel: audit: type=1130 audit(1769565576.429:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.379995 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:59:36.497049 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 28 01:59:36.516779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:59:36.793471 kernel: audit: type=1130 audit(1769565576.592:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.793519 kernel: audit: type=1130 audit(1769565576.592:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:36.594191 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:59:36.694722 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:59:36.811511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:59:36.940199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:59:37.103534 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:59:37.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.148509 kernel: audit: type=1130 audit(1769565577.101:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.147924 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:59:37.242653 kernel: audit: type=1130 audit(1769565577.197:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.243788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:59:37.319758 kernel: audit: type=1130 audit(1769565577.283:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.326580 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:59:37.388238 kernel: audit: type=1334 audit(1769565577.348:11): prog-id=6 op=LOAD
Jan 28 01:59:37.348000 audit: BPF prog-id=6 op=LOAD
Jan 28 01:59:37.359115 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:59:37.533023 dracut-cmdline[355]: dracut-109
Jan 28 01:59:37.585936 dracut-cmdline[355]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3
Jan 28 01:59:37.720965 systemd-resolved[356]: Positive Trust Anchors:
Jan 28 01:59:37.721007 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:59:37.721015 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 28 01:59:37.721059 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:59:37.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:37.890001 systemd-resolved[356]: Defaulting to hostname 'linux'.
Jan 28 01:59:37.892624 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:59:37.906705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:59:39.035619 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:59:39.154704 kernel: iscsi: registered transport (tcp)
Jan 28 01:59:39.254414 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:59:39.254509 kernel: QLogic iSCSI HBA Driver
Jan 28 01:59:39.486942 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:59:39.616432 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:59:39.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:39.698779 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:59:40.048961 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:59:40.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:40.093046 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:59:40.113573 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:59:40.342803 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:59:40.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:40.418000 audit: BPF prog-id=7 op=LOAD
Jan 28 01:59:40.418000 audit: BPF prog-id=8 op=LOAD
Jan 28 01:59:40.434247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:59:40.932264 systemd-udevd[580]: Using default interface naming scheme 'v257'.
Jan 28 01:59:41.148655 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:59:41.344550 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jan 28 01:59:41.344591 kernel: audit: type=1130 audit(1769565581.219:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:41.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:41.233624 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:59:41.679486 dracut-pre-trigger[628]: rd.md=0: removing MD RAID activation
Jan 28 01:59:42.049275 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:59:42.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:42.107521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:59:42.210488 kernel: audit: type=1130 audit(1769565582.094:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:42.293949 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:59:42.458788 kernel: audit: type=1130 audit(1769565582.320:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:42.459269 kernel: audit: type=1334 audit(1769565582.336:21): prog-id=9 op=LOAD
Jan 28 01:59:42.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:42.336000 audit: BPF prog-id=9 op=LOAD
Jan 28 01:59:42.343122 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:59:42.990644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:59:43.330570 kernel: audit: type=1130 audit(1769565583.030:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:43.330606 kernel: audit: type=1130 audit(1769565583.113:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:43.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:43.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:43.052141 systemd-networkd[725]: lo: Link UP
Jan 28 01:59:43.052149 systemd-networkd[725]: lo: Gained carrier
Jan 28 01:59:43.104113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:59:43.114665 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:59:43.117104 systemd[1]: Reached target network.target - Network.
Jan 28 01:59:43.723217 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:59:43.918738 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 01:59:44.033061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:59:44.131985 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:59:44.288762 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:59:44.515286 kernel: audit: type=1131 audit(1769565584.342:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:44.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:44.305518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:59:44.305643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:59:44.342802 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:59:44.511806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:59:44.727173 disk-uuid[768]: Primary Header is updated.
Jan 28 01:59:44.727173 disk-uuid[768]: Secondary Entries is updated.
Jan 28 01:59:44.727173 disk-uuid[768]: Secondary Header is updated.
Jan 28 01:59:44.850082 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:59:45.684438 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:59:45.712439 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 01:59:46.204690 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 28 01:59:46.204747 disk-uuid[769]: Warning: The kernel is still using the old partition table.
Jan 28 01:59:46.204747 disk-uuid[769]: The new table will be used at the next reboot or after you
Jan 28 01:59:46.204747 disk-uuid[769]: run partprobe(8) or kpartx(8)
Jan 28 01:59:46.204747 disk-uuid[769]: The operation has completed successfully.
Jan 28 01:59:45.712489 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:59:45.724996 systemd-networkd[725]: eth0: Link UP
Jan 28 01:59:45.750833 systemd-networkd[725]: eth0: Gained carrier
Jan 28 01:59:46.591563 kernel: audit: type=1130 audit(1769565586.425:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.591608 kernel: audit: type=1130 audit(1769565586.425:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:45.750961 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 01:59:45.911998 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:59:46.326099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:59:46.906698 kernel: audit: type=1130 audit(1769565586.700:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.906747 kernel: audit: type=1131 audit(1769565586.700:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:46.428146 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:59:46.433748 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:59:46.433996 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:59:46.726730 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:59:46.932284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:59:46.932438 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:59:47.040794 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:59:47.248085 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:59:47.941730 systemd-networkd[725]: eth0: Gained IPv6LL
Jan 28 01:59:48.100041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:59:48.289547 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (854)
Jan 28 01:59:48.289606 kernel: audit: type=1130 audit(1769565588.128:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:48.289629 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:59:48.289645 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:59:48.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:48.390533 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:59:48.390626 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:59:48.484172 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:59:48.543676 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:59:48.614494 kernel: audit: type=1130 audit(1769565588.565:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:48.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:48.568623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:59:49.932611 ignition[878]: Ignition 2.24.0
Jan 28 01:59:49.932901 ignition[878]: Stage: fetch-offline
Jan 28 01:59:49.933477 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:59:49.933498 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:59:49.933626 ignition[878]: parsed url from cmdline: ""
Jan 28 01:59:49.933632 ignition[878]: no config URL provided
Jan 28 01:59:49.933640 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:59:49.933654 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:59:49.933709 ignition[878]: op(1): [started] loading QEMU firmware config module
Jan 28 01:59:49.933716 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:59:50.022408 ignition[878]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:59:50.022470 ignition[878]: QEMU firmware config was not found. Ignoring...
Jan 28 01:59:50.023113 ignition[878]: parsing config with SHA512: 115de909b1bd19af16bd5270096839f04d9cdd2afd7bb010832daf5c167c4553c36bc074313264907934ce94ca6a22b2a766b6c2a8261ff924046dbbe6fded26
Jan 28 01:59:50.051119 unknown[878]: fetched base config from "system"
Jan 28 01:59:50.051546 ignition[878]: fetch-offline: fetch-offline passed
Jan 28 01:59:50.051139 unknown[878]: fetched user config from "qemu"
Jan 28 01:59:50.053637 ignition[878]: Ignition finished successfully
Jan 28 01:59:50.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:50.072055 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:59:50.144434 kernel: audit: type=1130 audit(1769565590.087:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:50.089581 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:59:50.098660 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:59:50.226484 ignition[888]: Ignition 2.24.0
Jan 28 01:59:50.234490 ignition[888]: Stage: kargs
Jan 28 01:59:50.234810 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:59:50.234830 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:59:50.321447 kernel: audit: type=1130 audit(1769565590.270:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:50.257281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:59:50.243011 ignition[888]: kargs: kargs passed
Jan 28 01:59:50.273683 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:59:50.243105 ignition[888]: Ignition finished successfully
Jan 28 01:59:50.462236 ignition[895]: Ignition 2.24.0
Jan 28 01:59:50.462254 ignition[895]: Stage: disks
Jan 28 01:59:50.474075 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:59:50.474096 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:59:50.474970 ignition[895]: disks: disks passed
Jan 28 01:59:50.475045 ignition[895]: Ignition finished successfully
Jan 28 01:59:50.546210 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:59:50.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:50.592482 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:59:50.642277 kernel: audit: type=1130 audit(1769565590.586:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:50.628806 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:59:50.642046 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:59:50.661667 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:59:50.696979 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:59:50.773008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:59:51.053522 systemd-fsck[903]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 28 01:59:51.116917 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:59:51.174813 kernel: audit: type=1130 audit(1769565591.127:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:51.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:51.131760 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:59:51.700264 kernel: EXT4-fs (vda9): mounted filesystem 60a46795-cc10-4076-a709-d039d1c23a6b r/w with ordered data mode. Quota mode: none.
Jan 28 01:59:51.701283 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:59:51.717685 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:59:51.754222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:59:51.787432 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:59:51.805695 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:59:51.805767 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:59:51.920591 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (911)
Jan 28 01:59:51.805805 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:59:51.958046 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:59:51.958090 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:59:51.989581 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:59:52.010780 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:59:52.010813 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:59:52.022780 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:59:52.048953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:59:52.906994 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:59:52.981625 kernel: audit: type=1130 audit(1769565592.918:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:52.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:52.922696 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:59:52.964691 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:59:53.082411 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:59:53.118722 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:59:53.272600 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:59:53.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:53.339969 kernel: audit: type=1130 audit(1769565593.298:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:53.491110 ignition[1008]: INFO : Ignition 2.24.0
Jan 28 01:59:53.491110 ignition[1008]: INFO : Stage: mount
Jan 28 01:59:53.526329 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:59:53.526329 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:59:53.580934 ignition[1008]: INFO : mount: mount passed
Jan 28 01:59:53.580934 ignition[1008]: INFO : Ignition finished successfully
Jan 28 01:59:53.611438 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:59:53.696813 kernel: audit: type=1130 audit(1769565593.631:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:53.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:53.642160 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:59:53.774684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:59:53.871307 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1020)
Jan 28 01:59:53.885394 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432
Jan 28 01:59:53.885462 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:59:53.968545 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:59:53.968627 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:59:53.984619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:59:54.108657 ignition[1038]: INFO : Ignition 2.24.0
Jan 28 01:59:54.108657 ignition[1038]: INFO : Stage: files
Jan 28 01:59:54.122332 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:59:54.122332 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:59:54.122332 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:59:54.182758 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:59:54.182758 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:59:54.230291 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:59:54.250696 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:59:54.250696 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:59:54.248913 unknown[1038]: wrote ssh authorized keys file for user: core
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 28 01:59:54.296615 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 28 01:59:54.889060 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 28 01:59:56.025814 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 28 01:59:56.025814 ignition[1038]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 28 01:59:56.077567 ignition[1038]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:59:56.077567 ignition[1038]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 28 01:59:56.077567 ignition[1038]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 28 01:59:56.077567 ignition[1038]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:59:56.245710 ignition[1038]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:59:56.275629 ignition[1038]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 28 01:59:56.275629 ignition[1038]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 28 01:59:56.275629 ignition[1038]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:59:56.275629 ignition[1038]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:59:56.275629 ignition[1038]: INFO : files: files passed
Jan 28 01:59:56.275629 ignition[1038]: INFO : Ignition finished successfully
Jan 28 01:59:56.282021 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:59:56.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.392463 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:59:56.435055 kernel: audit: type=1130 audit(1769565596.383:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.476814 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:59:56.574480 kernel: audit: type=1130 audit(1769565596.521:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.574525 kernel: audit: type=1131 audit(1769565596.521:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.517597 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:59:56.524999 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:59:56.586663 initrd-setup-root-after-ignition[1068]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 28 01:59:56.600358 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:59:56.600358 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:59:56.620226 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:59:56.628508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:59:56.660527 kernel: audit: type=1130 audit(1769565596.636:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.640957 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:59:56.681066 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:59:56.898923 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:59:56.985092 kernel: audit: type=1130 audit(1769565596.918:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.985130 kernel: audit: type=1131 audit(1769565596.918:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:56.899168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:59:56.919594 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 01:59:56.957690 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 01:59:57.038357 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 01:59:57.068929 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 01:59:57.181416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:59:57.265272 kernel: audit: type=1130 audit(1769565597.200:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.251058 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 01:59:57.344554 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:59:57.345055 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:59:57.358131 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:59:57.367784 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 01:59:57.416674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 01:59:57.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.417031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:59:57.442421 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 01:59:57.458705 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 01:59:57.471777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 01:59:57.487730 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:59:57.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.489006 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 01:59:57.489151 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:59:57.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.489302 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 01:59:57.489495 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:59:57.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:57.489651 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 01:59:57.489789 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 01:59:57.490011 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 01:59:57.490128 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 01:59:57.490323 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:59:57.501271 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:59:57.800541 ignition[1094]: INFO : Ignition 2.24.0
Jan 28 01:59:57.800541 ignition[1094]: INFO : Stage: umount
Jan 28 01:59:57.800541 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:59:57.800541 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:59:57.502556 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:59:57.808999 ignition[1094]: INFO : umount: umount passed
Jan 28 01:59:57.808999 ignition[1094]: INFO : Ignition finished successfully
Jan 28 01:59:57.510668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 01:59:57.515779 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:59:57.534101 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 01:59:57.534309 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:59:57.541022 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 01:59:57.541190 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:59:57.541701 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 01:59:57.548252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:59:57.554820 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:59:57.651143 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:59:57.665017 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:59:57.665723 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:59:57.665988 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:59:57.666568 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 01:59:57.666698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:59:57.672614 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 28 01:59:57.672722 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 01:59:57.673010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 01:59:57.673176 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:59:57.673468 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 01:59:57.673600 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 01:59:57.679792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 01:59:57.721210 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 01:59:57.752958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 01:59:57.753261 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:59:57.774421 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 01:59:57.774624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:59:58.029006 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 28 01:59:58.029096 kernel: audit: type=1131 audit(1769565598.009:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.010764 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 01:59:58.011156 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:59:58.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.101260 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 01:59:58.125053 kernel: audit: type=1131 audit(1769565598.074:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.108458 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 01:59:58.202980 kernel: audit: type=1131 audit(1769565598.130:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.203024 kernel: audit: type=1131 audit(1769565598.130:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.108680 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 01:59:58.131484 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 01:59:58.131651 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 01:59:58.149963 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 01:59:58.150129 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 01:59:58.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.285805 systemd[1]: Stopped target network.target - Network.
Jan 28 01:59:58.357723 kernel: audit: type=1130 audit(1769565598.275:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.357762 kernel: audit: type=1131 audit(1769565598.275:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.338194 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 01:59:58.338334 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 01:59:58.580297 kernel: audit: type=1131 audit(1769565598.372:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.580339 kernel: audit: type=1131 audit(1769565598.443:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.580361 kernel: audit: type=1131 audit(1769565598.484:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.580439 kernel: audit: type=1131 audit(1769565598.484:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.375080 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 01:59:58.375192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 01:59:58.443984 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 01:59:58.444104 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 01:59:58.485721 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 01:59:58.485831 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 01:59:58.492094 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 01:59:58.492185 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 01:59:58.541953 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 01:59:58.553219 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 01:59:58.579545 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 01:59:58.579722 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 01:59:58.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.701048 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 01:59:58.703704 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 01:59:58.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.726968 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 28 01:59:58.726000 audit: BPF prog-id=9 op=UNLOAD
Jan 28 01:59:58.726000 audit: BPF prog-id=6 op=UNLOAD
Jan 28 01:59:58.744600 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 01:59:58.744762 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:59:58.790976 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 01:59:58.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.807589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 01:59:58.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.807720 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:59:58.819676 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 01:59:58.819774 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:59:58.835835 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 01:59:58.836028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:59:58.842737 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:59:58.929541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 01:59:58.929923 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:59:58.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:58.949487 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 01:59:58.949635 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:59:58.973790 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 01:59:58.974098 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:59:59.010470 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 01:59:59.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.010635 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:59:59.029455 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 01:59:59.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.029569 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:59:59.037688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:59:59.037779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:59:59.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.070073 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 01:59:59.097361 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 28 01:59:59.097554 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:59:59.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.097766 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 01:59:59.097834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:59:59.098109 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 28 01:59:59.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.098178 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:59:59.098314 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 01:59:59.104260 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:59:59.115056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:59:59.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.115576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:59:59.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.126597 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 01:59:59.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.128771 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 01:59:59.386011 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 01:59:59.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:59:59.386261 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 01:59:59.403973 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 01:59:59.410484 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 01:59:59.492635 systemd[1]: Switching root.
Jan 28 01:59:59.560912 systemd-journald[320]: Journal stopped
Jan 28 02:00:03.798066 systemd-journald[320]: Received SIGTERM from PID 1 (systemd).
Jan 28 02:00:03.798140 kernel: SELinux: policy capability network_peer_controls=1
Jan 28 02:00:03.798160 kernel: SELinux: policy capability open_perms=1
Jan 28 02:00:03.798181 kernel: SELinux: policy capability extended_socket_class=1
Jan 28 02:00:03.798197 kernel: SELinux: policy capability always_check_network=0
Jan 28 02:00:03.798221 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 28 02:00:03.798240 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 28 02:00:03.798260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 28 02:00:03.798281 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 28 02:00:03.798301 kernel: SELinux: policy capability userspace_initial_context=0
Jan 28 02:00:03.798324 systemd[1]: Successfully loaded SELinux policy in 171.338ms.
Jan 28 02:00:03.798348 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.311ms.
Jan 28 02:00:03.798366 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 02:00:03.798383 systemd[1]: Detected virtualization kvm.
Jan 28 02:00:03.798444 systemd[1]: Detected architecture x86-64.
Jan 28 02:00:03.798462 systemd[1]: Detected first boot.
Jan 28 02:00:03.798479 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 28 02:00:03.798499 zram_generator::config[1137]: No configuration found.
Jan 28 02:00:03.798517 kernel: Guest personality initialized and is inactive
Jan 28 02:00:03.798538 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 28 02:00:03.798553 kernel: Initialized host personality
Jan 28 02:00:03.798569 kernel: NET: Registered PF_VSOCK protocol family
Jan 28 02:00:03.798585 systemd[1]: Populated /etc with preset unit settings.
Jan 28 02:00:03.798602 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 28 02:00:03.798621 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 28 02:00:03.798641 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 28 02:00:03.798662 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 28 02:00:03.798685 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 02:00:03.798710 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 28 02:00:03.798727 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 28 02:00:03.798744 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 28 02:00:03.798760 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 02:00:03.798778 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 28 02:00:03.798794 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 28 02:00:03.798810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 02:00:03.798833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 02:00:03.798935 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 28 02:00:03.798955 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 28 02:00:03.798972 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 28 02:00:03.798990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 02:00:03.799007 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 28 02:00:03.799029 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 02:00:03.799045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:00:03.799062 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 28 02:00:03.799079 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 28 02:00:03.799095 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 28 02:00:03.799115 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 28 02:00:03.799133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 02:00:03.799154 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 02:00:03.799170 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 28 02:00:03.799187 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 02:00:03.799204 systemd[1]: Reached target swap.target - Swaps.
Jan 28 02:00:03.799220 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 28 02:00:03.799237 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 28 02:00:03.799254 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 28 02:00:03.799273 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 28 02:00:03.799292 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 28 02:00:03.799310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 02:00:03.799327 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 28 02:00:03.799344 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 28 02:00:03.799360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 02:00:03.799377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 02:00:03.799441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 28 02:00:03.799469 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 28 02:00:03.799487 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 28 02:00:03.799504 systemd[1]: Mounting media.mount - External Media Directory...
Jan 28 02:00:03.799521 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 02:00:03.799537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 28 02:00:03.799555 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 28 02:00:03.799582 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 28 02:00:03.799605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 28 02:00:03.799626 systemd[1]: Reached target machines.target - Containers.
Jan 28 02:00:03.799647 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 28 02:00:03.799670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 02:00:03.799691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 02:00:03.799712 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 28 02:00:03.799740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 02:00:03.799762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 02:00:03.799782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 02:00:03.799802 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 28 02:00:03.799823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 02:00:03.799928 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 28 02:00:03.799952 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 28 02:00:03.799980 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 28 02:00:03.800000 kernel: kauditd_printk_skb: 37 callbacks suppressed
Jan 28 02:00:03.800023 kernel: audit: type=1131 audit(1769565603.525:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.800043 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 28 02:00:03.800065 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 28 02:00:03.800085 kernel: audit: type=1131 audit(1769565603.567:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.800110 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 02:00:03.800134 kernel: audit: type=1334 audit(1769565603.591:101): prog-id=14 op=UNLOAD
Jan 28 02:00:03.800154 kernel: audit: type=1334 audit(1769565603.591:102): prog-id=13 op=UNLOAD
Jan 28 02:00:03.800173 kernel: audit: type=1334 audit(1769565603.605:103): prog-id=15 op=LOAD
Jan 28 02:00:03.800196 kernel: audit: type=1334 audit(1769565603.615:104): prog-id=16 op=LOAD
Jan 28 02:00:03.800216 kernel: audit: type=1334 audit(1769565603.630:105): prog-id=17 op=LOAD
Jan 28 02:00:03.800234 kernel: ACPI: bus type drm_connector registered
Jan 28 02:00:03.800253 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 02:00:03.800274 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 02:00:03.800295 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 02:00:03.800315 kernel: fuse: init (API version 7.41)
Jan 28 02:00:03.800340 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 28 02:00:03.800360 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 28 02:00:03.800381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 02:00:03.800451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 02:00:03.800471 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 28 02:00:03.800487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 28 02:00:03.800531 systemd-journald[1224]: Collecting audit messages is enabled.
Jan 28 02:00:03.800579 kernel: audit: type=1305 audit(1769565603.794:106): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 28 02:00:03.800603 kernel: audit: type=1300 audit(1769565603.794:106): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff5fb4f130 a2=4000 a3=0 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:00:03.800624 kernel: audit: type=1327 audit(1769565603.794:106): proctitle="/usr/lib/systemd/systemd-journald"
Jan 28 02:00:03.800643 systemd-journald[1224]: Journal started
Jan 28 02:00:03.800685 systemd-journald[1224]: Runtime Journal (/run/log/journal/3e915e9ddd5c45c0bdd691fa2cfd06dd) is 6M, max 48.2M, 42.1M free.
Jan 28 02:00:02.753000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 28 02:00:03.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.591000 audit: BPF prog-id=14 op=UNLOAD
Jan 28 02:00:03.591000 audit: BPF prog-id=13 op=UNLOAD
Jan 28 02:00:03.605000 audit: BPF prog-id=15 op=LOAD
Jan 28 02:00:03.615000 audit: BPF prog-id=16 op=LOAD
Jan 28 02:00:03.630000 audit: BPF prog-id=17 op=LOAD
Jan 28 02:00:03.794000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 28 02:00:03.794000 audit[1224]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff5fb4f130 a2=4000 a3=0 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:00:03.794000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 28 02:00:02.066814 systemd[1]: Queued start job for default target multi-user.target.
Jan 28 02:00:02.124524 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 28 02:00:02.129782 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 28 02:00:02.134319 systemd[1]: systemd-journald.service: Consumed 2.047s CPU time.
Jan 28 02:00:03.849434 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 02:00:03.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.855281 systemd[1]: Mounted media.mount - External Media Directory.
Jan 28 02:00:03.859625 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 28 02:00:03.864804 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 28 02:00:03.872538 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 28 02:00:03.879550 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 28 02:00:03.887333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 02:00:03.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.898065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 28 02:00:03.898654 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 28 02:00:03.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.911707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 02:00:03.913807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 02:00:03.926163 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 02:00:03.926604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 02:00:03.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.935473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 02:00:03.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.936821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 02:00:03.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.944384 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 28 02:00:03.944725 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 28 02:00:03.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.950822 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 02:00:03.951553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 02:00:03.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.961193 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 02:00:03.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.972149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 02:00:03.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.986830 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 28 02:00:03.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:03.998541 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 28 02:00:04.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.010625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 02:00:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.041318 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 02:00:04.051671 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 28 02:00:04.065251 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 28 02:00:04.076653 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 28 02:00:04.091643 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 28 02:00:04.091762 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 02:00:04.106512 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 28 02:00:04.126485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 02:00:04.126718 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 28 02:00:04.145064 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 28 02:00:04.167523 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 28 02:00:04.185815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 02:00:04.194106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 28 02:00:04.206685 systemd-journald[1224]: Time spent on flushing to /var/log/journal/3e915e9ddd5c45c0bdd691fa2cfd06dd is 65.036ms for 1121 entries.
Jan 28 02:00:04.206685 systemd-journald[1224]: System Journal (/var/log/journal/3e915e9ddd5c45c0bdd691fa2cfd06dd) is 8M, max 163.5M, 155.5M free.
Jan 28 02:00:04.316752 systemd-journald[1224]: Received client request to flush runtime journal.
Jan 28 02:00:04.222971 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 02:00:04.232746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 02:00:04.277081 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 28 02:00:04.300716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 02:00:04.314141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 28 02:00:04.323455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 28 02:00:04.339593 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 28 02:00:04.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.348577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 28 02:00:04.364032 kernel: loop1: detected capacity change from 0 to 224512
Jan 28 02:00:04.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.390089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 02:00:04.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.407119 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 28 02:00:04.427652 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 28 02:00:04.476153 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 28 02:00:04.476582 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 28 02:00:04.494064 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 02:00:04.512080 kernel: loop2: detected capacity change from 0 to 50784
Jan 28 02:00:04.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.519101 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 28 02:00:04.569157 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 28 02:00:04.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.655461 kernel: loop3: detected capacity change from 0 to 111560
Jan 28 02:00:04.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.683819 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 28 02:00:04.699000 audit: BPF prog-id=18 op=LOAD
Jan 28 02:00:04.699000 audit: BPF prog-id=19 op=LOAD
Jan 28 02:00:04.699000 audit: BPF prog-id=20 op=LOAD
Jan 28 02:00:04.704342 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 28 02:00:04.725000 audit: BPF prog-id=21 op=LOAD
Jan 28 02:00:04.731117 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 02:00:04.753362 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 02:00:04.768268 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 28 02:00:04.781000 audit: BPF prog-id=22 op=LOAD
Jan 28 02:00:04.781000 audit: BPF prog-id=23 op=LOAD
Jan 28 02:00:04.781000 audit: BPF prog-id=24 op=LOAD
Jan 28 02:00:04.784273 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 28 02:00:04.795933 kernel: loop4: detected capacity change from 0 to 224512
Jan 28 02:00:04.795000 audit: BPF prog-id=25 op=LOAD
Jan 28 02:00:04.796000 audit: BPF prog-id=26 op=LOAD
Jan 28 02:00:04.796000 audit: BPF prog-id=27 op=LOAD
Jan 28 02:00:04.800758 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 28 02:00:04.839315 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jan 28 02:00:04.839963 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jan 28 02:00:04.844019 kernel: loop5: detected capacity change from 0 to 50784
Jan 28 02:00:04.857707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 02:00:04.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.913935 kernel: loop6: detected capacity change from 0 to 111560
Jan 28 02:00:04.918190 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 28 02:00:04.932356 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 28 02:00:04.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.946645 (sd-merge)[1287]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 28 02:00:04.946817 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 28 02:00:04.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:04.959965 (sd-merge)[1287]: Merged extensions into '/usr'.
Jan 28 02:00:04.973248 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 28 02:00:04.973277 systemd[1]: Reloading...
Jan 28 02:00:05.175307 zram_generator::config[1331]: No configuration found.
Jan 28 02:00:05.197234 systemd-resolved[1284]: Positive Trust Anchors:
Jan 28 02:00:05.197303 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 02:00:05.197311 systemd-resolved[1284]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 28 02:00:05.197355 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 02:00:05.219475 systemd-oomd[1282]: No swap; memory pressure usage will be degraded
Jan 28 02:00:05.230225 systemd-resolved[1284]: Defaulting to hostname 'linux'.
Jan 28 02:00:05.766182 systemd[1]: Reloading finished in 791 ms.
Jan 28 02:00:05.822018 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 28 02:00:05.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:05.842628 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 02:00:05.864399 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 28 02:00:05.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:05.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:05.882332 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 28 02:00:05.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:05.928989 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 02:00:05.966954 systemd[1]: Starting ensure-sysext.service...
Jan 28 02:00:05.981383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 02:00:06.005000 audit: BPF prog-id=8 op=UNLOAD
Jan 28 02:00:06.005000 audit: BPF prog-id=7 op=UNLOAD
Jan 28 02:00:06.005000 audit: BPF prog-id=28 op=LOAD
Jan 28 02:00:06.005000 audit: BPF prog-id=29 op=LOAD
Jan 28 02:00:06.027175 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 02:00:06.050000 audit: BPF prog-id=30 op=LOAD
Jan 28 02:00:06.050000 audit: BPF prog-id=15 op=UNLOAD
Jan 28 02:00:06.050000 audit: BPF prog-id=31 op=LOAD
Jan 28 02:00:06.050000 audit: BPF prog-id=32 op=LOAD
Jan 28 02:00:06.050000 audit: BPF prog-id=16 op=UNLOAD
Jan 28 02:00:06.050000 audit: BPF prog-id=17 op=UNLOAD
Jan 28 02:00:06.055000 audit: BPF prog-id=33 op=LOAD
Jan 28 02:00:06.055000 audit: BPF prog-id=18 op=UNLOAD
Jan 28 02:00:06.055000 audit: BPF prog-id=34 op=LOAD
Jan 28 02:00:06.055000 audit: BPF prog-id=35 op=LOAD
Jan 28 02:00:06.055000 audit: BPF prog-id=19 op=UNLOAD
Jan 28 02:00:06.055000 audit: BPF prog-id=20 op=UNLOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=36 op=LOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=22 op=UNLOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=37 op=LOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=38 op=LOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=23 op=UNLOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=24 op=UNLOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=39 op=LOAD
Jan 28 02:00:06.065000 audit: BPF prog-id=21 op=UNLOAD
Jan 28 02:00:06.071000 audit: BPF prog-id=40 op=LOAD
Jan 28 02:00:06.071000 audit: BPF prog-id=25 op=UNLOAD
Jan 28 02:00:06.071000 audit: BPF prog-id=41 op=LOAD
Jan 28 02:00:06.071000 audit: BPF prog-id=42 op=LOAD
Jan 28 02:00:06.071000 audit: BPF prog-id=26 op=UNLOAD
Jan 28 02:00:06.071000 audit: BPF prog-id=27 op=UNLOAD
Jan 28 02:00:06.117139 systemd[1]: Reload requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)...
Jan 28 02:00:06.117326 systemd[1]: Reloading...
Jan 28 02:00:06.137210 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 28 02:00:06.137500 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 28 02:00:06.138032 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 28 02:00:06.150543 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Jan 28 02:00:06.150704 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Jan 28 02:00:06.185105 systemd-udevd[1371]: Using default interface naming scheme 'v257'.
Jan 28 02:00:06.194256 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 02:00:06.194318 systemd-tmpfiles[1370]: Skipping /boot
Jan 28 02:00:06.279257 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot.
Jan 28 02:00:06.279323 systemd-tmpfiles[1370]: Skipping /boot
Jan 28 02:00:06.406544 zram_generator::config[1406]: No configuration found.
Jan 28 02:00:06.696967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 28 02:00:06.727994 kernel: ACPI: button: Power Button [PWRF]
Jan 28 02:00:06.740036 kernel: mousedev: PS/2 mouse device common for all mice
Jan 28 02:00:06.944459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 02:00:06.953699 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 28 02:00:06.954646 systemd[1]: Reloading finished in 836 ms.
Jan 28 02:00:06.976820 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 02:00:06.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:06.991944 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 28 02:00:06.992349 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 28 02:00:06.986000 audit: BPF prog-id=43 op=LOAD
Jan 28 02:00:06.986000 audit: BPF prog-id=44 op=LOAD
Jan 28 02:00:06.990000 audit: BPF prog-id=28 op=UNLOAD
Jan 28 02:00:06.990000 audit: BPF prog-id=29 op=UNLOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=45 op=LOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=33 op=UNLOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=46 op=LOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=47 op=LOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=34 op=UNLOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=35 op=UNLOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=48 op=LOAD
Jan 28 02:00:06.991000 audit: BPF prog-id=39 op=UNLOAD
Jan 28 02:00:06.996000 audit: BPF prog-id=49 op=LOAD
Jan 28 02:00:06.996000 audit: BPF prog-id=30 op=UNLOAD
Jan 28 02:00:06.996000 audit: BPF prog-id=50 op=LOAD
Jan 28 02:00:06.996000 audit: BPF prog-id=51 op=LOAD
Jan 28 02:00:06.996000 audit: BPF prog-id=31 op=UNLOAD
Jan 28 02:00:06.996000 audit: BPF prog-id=32 op=UNLOAD
Jan 28 02:00:07.000000 audit: BPF prog-id=52 op=LOAD
Jan 28 02:00:07.000000 audit: BPF prog-id=36 op=UNLOAD
Jan 28 02:00:07.000000 audit: BPF prog-id=53 op=LOAD
Jan 28 02:00:07.000000 audit: BPF prog-id=54 op=LOAD
Jan 28 02:00:07.000000 audit: BPF prog-id=37 op=UNLOAD
Jan 28 02:00:07.000000 audit: BPF prog-id=38 op=UNLOAD
Jan 28 02:00:07.009000 audit: BPF prog-id=55 op=LOAD
Jan 28 02:00:07.009000 audit: BPF prog-id=40 op=UNLOAD
Jan 28 02:00:07.009000 audit: BPF prog-id=56 op=LOAD
Jan 28 02:00:07.009000 audit: BPF prog-id=57 op=LOAD
Jan 28 02:00:07.009000 audit: BPF prog-id=41 op=UNLOAD
Jan 28 02:00:07.009000 audit: BPF prog-id=42 op=UNLOAD
Jan 28 02:00:07.015752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 02:00:07.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.092022 systemd[1]: Finished ensure-sysext.service.
Jan 28 02:00:07.138323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 02:00:07.141381 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 28 02:00:07.150058 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 28 02:00:07.160163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 02:00:07.305918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 02:00:07.350000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 02:00:07.456196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 02:00:07.490596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 02:00:07.502488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 02:00:07.504360 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 28 02:00:07.522052 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 28 02:00:07.591000 audit: BPF prog-id=58 op=LOAD
Jan 28 02:00:07.544707 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 28 02:00:07.550002 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 02:00:07.551684 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 28 02:00:07.608211 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 02:00:07.622000 audit: BPF prog-id=59 op=LOAD
Jan 28 02:00:07.623338 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 28 02:00:07.649998 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 28 02:00:07.687971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:00:07.703104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 28 02:00:07.705593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 02:00:07.706022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 02:00:07.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.717754 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 02:00:07.718162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 02:00:07.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.731469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 02:00:07.732017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 02:00:07.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.743829 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 02:00:07.744725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 02:00:07.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.765725 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 28 02:00:07.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.777469 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 28 02:00:07.791000 audit[1512]: SYSTEM_BOOT pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 02:00:07.799000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 28 02:00:07.799000 audit[1517]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe77109f20 a2=420 a3=0 items=0 ppid=1484 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:00:07.799000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 28 02:00:07.801799 augenrules[1517]: No rules
Jan 28 02:00:07.805535 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 28 02:00:07.806243 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 28 02:00:07.828581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 02:00:07.832106 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 02:00:07.846121 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 28 02:00:07.914272 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 28 02:00:07.914807 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 02:00:08.109386 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 28 02:00:08.146011 systemd-networkd[1507]: lo: Link UP
Jan 28 02:00:08.146024 systemd-networkd[1507]: lo: Gained carrier
Jan 28 02:00:08.151292 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 02:00:08.151300 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 02:00:08.154606 systemd-networkd[1507]: eth0: Link UP
Jan 28 02:00:08.156401 systemd-networkd[1507]: eth0: Gained carrier
Jan 28 02:00:08.156478 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 28 02:00:08.222124 systemd-networkd[1507]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 02:00:08.225013 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection.
Jan 28 02:00:08.229501 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 28 02:00:08.230159 systemd-timesyncd[1510]: Initial clock synchronization to Wed 2026-01-28 02:00:08.518579 UTC.
Jan 28 02:00:08.603301 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 02:00:08.650634 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:00:08.679733 systemd[1]: Reached target network.target - Network.
Jan 28 02:00:08.692485 systemd[1]: Reached target time-set.target - System Time Set.
Jan 28 02:00:08.713331 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 28 02:00:08.740130 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 28 02:00:08.894693 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 28 02:00:09.771317 ldconfig[1499]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 28 02:00:09.809106 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 28 02:00:09.839315 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 28 02:00:09.926087 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 28 02:00:09.967995 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 02:00:09.978436 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 28 02:00:09.993426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 28 02:00:10.022341 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 28 02:00:10.053442 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 28 02:00:10.070835 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 28 02:00:10.096441 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 28 02:00:10.122589 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 28 02:00:10.129960 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 28 02:00:10.165049 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 28 02:00:10.165102 systemd[1]: Reached target paths.target - Path Units.
Jan 28 02:00:10.171295 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 02:00:10.185555 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 28 02:00:10.264137 systemd-networkd[1507]: eth0: Gained IPv6LL
Jan 28 02:00:10.274668 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 28 02:00:10.285612 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 28 02:00:10.295746 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 28 02:00:10.306983 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 28 02:00:10.376739 kernel: kvm_amd: TSC scaling supported
Jan 28 02:00:10.376819 kernel: kvm_amd: Nested Virtualization enabled
Jan 28 02:00:10.376844 kernel: kvm_amd: Nested Paging enabled
Jan 28 02:00:10.377361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 28 02:00:10.389514 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 28 02:00:10.389625 kernel: kvm_amd: PMU virtualization is disabled
Jan 28 02:00:10.402630 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 28 02:00:10.433844 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 28 02:00:10.449527 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 28 02:00:10.468045 systemd[1]: Reached target network-online.target - Network is Online.
Jan 28 02:00:10.479409 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 02:00:10.497116 systemd[1]: Reached target basic.target - Basic System.
Jan 28 02:00:10.512833 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 28 02:00:10.512953 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 28 02:00:10.520215 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 28 02:00:10.561118 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 28 02:00:10.591137 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 28 02:00:10.620442 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 28 02:00:10.644271 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 28 02:00:10.665589 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 28 02:00:10.674072 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 28 02:00:10.681447 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 28 02:00:10.692243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 02:00:10.701138 jq[1554]: false
Jan 28 02:00:10.725220 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 28 02:00:10.752719 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 28 02:00:10.785164 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 28 02:00:10.804520 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Jan 28 02:00:10.800753 oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Jan 28 02:00:10.818569 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 28 02:00:10.837718 extend-filesystems[1555]: Found /dev/vda6
Jan 28 02:00:10.883465 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 28 02:00:10.886617 oslogin_cache_refresh[1556]: Failure getting users, quitting
Jan 28 02:00:10.890498 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting
Jan 28 02:00:10.890498 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 28 02:00:10.890498 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache
Jan 28 02:00:10.886643 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 28 02:00:10.886791 oslogin_cache_refresh[1556]: Refreshing group entry cache
Jan 28 02:00:10.896767 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 28 02:00:10.898534 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 28 02:00:10.914490 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting
Jan 28 02:00:10.914490 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 28 02:00:10.912271 oslogin_cache_refresh[1556]: Failure getting groups, quitting
Jan 28 02:00:10.912295 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 28 02:00:10.917754 systemd[1]: Starting update-engine.service - Update Engine...
Jan 28 02:00:10.918363 extend-filesystems[1555]: Found /dev/vda9
Jan 28 02:00:10.945193 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 28 02:00:10.949855 extend-filesystems[1555]: Checking size of /dev/vda9
Jan 28 02:00:10.998444 extend-filesystems[1555]: Resized partition /dev/vda9
Jan 28 02:00:11.052668 extend-filesystems[1591]: resize2fs 1.47.3 (8-Jul-2025)
Jan 28 02:00:11.154210 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 28 02:00:11.020465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 28 02:00:11.154505 jq[1580]: true
Jan 28 02:00:11.056536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 28 02:00:11.058044 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 28 02:00:11.157205 update_engine[1573]: I20260128 02:00:11.081696 1573 main.cc:92] Flatcar Update Engine starting
Jan 28 02:00:11.059045 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 28 02:00:11.059511 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 28 02:00:11.102138 systemd[1]: motdgen.service: Deactivated successfully.
Jan 28 02:00:11.102642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 28 02:00:11.121746 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 28 02:00:11.175650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 28 02:00:11.226442 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 28 02:00:11.251295 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 28 02:00:11.316812 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 28 02:00:11.318983 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 28 02:00:11.350430 extend-filesystems[1591]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 28 02:00:11.350430 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 28 02:00:11.350430 extend-filesystems[1591]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 28 02:00:11.386305 extend-filesystems[1555]: Resized filesystem in /dev/vda9
Jan 28 02:00:11.367037 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 28 02:00:11.407273 jq[1600]: true
Jan 28 02:00:11.375683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 28 02:00:11.421445 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 28 02:00:11.492332 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 28 02:00:11.492374 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 28 02:00:11.493964 systemd-logind[1569]: New seat seat0.
Jan 28 02:00:11.495736 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 28 02:00:11.736178 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Jan 28 02:00:11.767051 dbus-daemon[1552]: [system] SELinux support is enabled
Jan 28 02:00:11.788124 update_engine[1573]: I20260128 02:00:11.788060 1573 update_check_scheduler.cc:74] Next update check in 7m15s
Jan 28 02:00:11.788140 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 28 02:00:11.804053 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 28 02:00:11.834662 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 28 02:00:11.844819 dbus-daemon[1552]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 28 02:00:11.835648 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 28 02:00:11.836005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 28 02:00:11.855115 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 28 02:00:11.855153 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 28 02:00:11.880716 systemd[1]: Started update-engine.service - Update Engine.
Jan 28 02:00:11.912999 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 28 02:00:12.258823 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 28 02:00:12.260572 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 28 02:00:12.357850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 28 02:00:12.381552 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 28 02:00:12.457735 containerd[1601]: time="2026-01-28T02:00:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 28 02:00:12.463506 containerd[1601]: time="2026-01-28T02:00:12.462700499Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 28 02:00:12.503104 containerd[1601]: time="2026-01-28T02:00:12.501668952Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.756µs"
Jan 28 02:00:12.505153 containerd[1601]: time="2026-01-28T02:00:12.503492967Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 28 02:00:12.505153 containerd[1601]: time="2026-01-28T02:00:12.504265567Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 28 02:00:12.510958 containerd[1601]: time="2026-01-28T02:00:12.509997016Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 28 02:00:12.512269 containerd[1601]: time="2026-01-28T02:00:12.512159083Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 28 02:00:12.512269 containerd[1601]: time="2026-01-28T02:00:12.512234982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 28 02:00:12.514126 containerd[1601]: time="2026-01-28T02:00:12.512335253Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 28 02:00:12.514317 containerd[1601]: time="2026-01-28T02:00:12.514225170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.523146 containerd[1601]: time="2026-01-28T02:00:12.521221143Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.523237 containerd[1601]: time="2026-01-28T02:00:12.523148107Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 28 02:00:12.523237 containerd[1601]: time="2026-01-28T02:00:12.523178267Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 28 02:00:12.523237 containerd[1601]: time="2026-01-28T02:00:12.523192172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.523742 containerd[1601]: time="2026-01-28T02:00:12.523476237Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.523742 containerd[1601]: time="2026-01-28T02:00:12.523501071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 28 02:00:12.533096 containerd[1601]: time="2026-01-28T02:00:12.526665683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.533096 containerd[1601]: time="2026-01-28T02:00:12.529428177Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.533096 containerd[1601]: time="2026-01-28T02:00:12.529482175Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 02:00:12.533096 containerd[1601]: time="2026-01-28T02:00:12.529499456Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 28 02:00:12.533096 containerd[1601]: time="2026-01-28T02:00:12.529724915Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 28 02:00:12.538417 containerd[1601]: time="2026-01-28T02:00:12.534018838Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 28 02:00:12.538417 containerd[1601]: time="2026-01-28T02:00:12.534311914Z" level=info msg="metadata content store policy set" policy=shared
Jan 28 02:00:12.560413 systemd[1]: issuegen.service: Deactivated successfully.
Jan 28 02:00:12.567516 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 28 02:00:12.588605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 28 02:00:12.601780 containerd[1601]: time="2026-01-28T02:00:12.601739727Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 02:00:12.602133 containerd[1601]: time="2026-01-28T02:00:12.602092383Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 02:00:12.604360 containerd[1601]: time="2026-01-28T02:00:12.604141384Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 02:00:12.604360 containerd[1601]: time="2026-01-28T02:00:12.604269444Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 02:00:12.604501 containerd[1601]: time="2026-01-28T02:00:12.604481869Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.607911633Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.607942809Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608712517Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608738788Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608759425Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608780750Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608798616Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608814522Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.608836483Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.609146922Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.609178293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.609200777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.609217740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 02:00:12.613363 containerd[1601]: time="2026-01-28T02:00:12.609242575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609262545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609280760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609296605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609314994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609337180Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609355693Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609390185Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609456836Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 02:00:12.616005 containerd[1601]: time="2026-01-28T02:00:12.609475175Z" level=info msg="Start snapshots syncer" Jan 28 02:00:12.622482 containerd[1601]: time="2026-01-28T02:00:12.618612538Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 02:00:12.622736 containerd[1601]: time="2026-01-28T02:00:12.622679496Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 02:00:12.626689 containerd[1601]: time="2026-01-28T02:00:12.626614484Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 02:00:12.627208 containerd[1601]: 
time="2026-01-28T02:00:12.627037692Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 02:00:12.627536 containerd[1601]: time="2026-01-28T02:00:12.627504946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 02:00:12.627645 containerd[1601]: time="2026-01-28T02:00:12.627628572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 02:00:12.627712 containerd[1601]: time="2026-01-28T02:00:12.627697205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 02:00:12.627773 containerd[1601]: time="2026-01-28T02:00:12.627760389Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 02:00:12.627827 containerd[1601]: time="2026-01-28T02:00:12.627814861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 02:00:12.628002 containerd[1601]: time="2026-01-28T02:00:12.627984031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 02:00:12.632420 containerd[1601]: time="2026-01-28T02:00:12.632390685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 02:00:12.632511 containerd[1601]: time="2026-01-28T02:00:12.632490452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 02:00:12.632591 containerd[1601]: time="2026-01-28T02:00:12.632571226Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 02:00:12.633057 containerd[1601]: time="2026-01-28T02:00:12.633034096Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 02:00:12.633151 containerd[1601]: 
time="2026-01-28T02:00:12.633128949Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 02:00:12.637962 containerd[1601]: time="2026-01-28T02:00:12.633202600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 02:00:12.637962 containerd[1601]: time="2026-01-28T02:00:12.637808019Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 02:00:12.637962 containerd[1601]: time="2026-01-28T02:00:12.637827918Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 02:00:12.638159 containerd[1601]: time="2026-01-28T02:00:12.638135566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 02:00:12.638298 containerd[1601]: time="2026-01-28T02:00:12.638218647Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 02:00:12.648393 containerd[1601]: time="2026-01-28T02:00:12.642232745Z" level=info msg="runtime interface created" Jan 28 02:00:12.648393 containerd[1601]: time="2026-01-28T02:00:12.642250591Z" level=info msg="created NRI interface" Jan 28 02:00:12.648393 containerd[1601]: time="2026-01-28T02:00:12.642266404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 02:00:12.648393 containerd[1601]: time="2026-01-28T02:00:12.642292100Z" level=info msg="Connect containerd service" Jan 28 02:00:12.648393 containerd[1601]: time="2026-01-28T02:00:12.642383279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 02:00:12.649388 containerd[1601]: time="2026-01-28T02:00:12.649357517Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 02:00:12.695482 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 02:00:12.735477 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 02:00:12.789220 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 02:00:12.817746 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253536144Z" level=info msg="Start subscribing containerd event" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253618837Z" level=info msg="Start recovering state" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253750018Z" level=info msg="Start event monitor" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253764495Z" level=info msg="Start cni network conf syncer for default" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253774379Z" level=info msg="Start streaming server" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253784427Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253793706Z" level=info msg="runtime interface starting up..." Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253801656Z" level=info msg="starting plugins..." Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.253822129Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.257722441Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 02:00:13.257954 containerd[1601]: time="2026-01-28T02:00:13.257824409Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 28 02:00:13.262147 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 02:00:13.270184 containerd[1601]: time="2026-01-28T02:00:13.270132846Z" level=info msg="containerd successfully booted in 0.820403s" Jan 28 02:00:14.127244 kernel: EDAC MC: Ver: 3.0.0 Jan 28 02:00:15.142083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:00:15.173210 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 02:00:15.196659 systemd[1]: Startup finished in 26.332s (kernel) + 26.838s (initrd) + 15.334s (userspace) = 1min 8.504s. Jan 28 02:00:15.198819 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:00:17.479133 kubelet[1684]: E0128 02:00:17.477198 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:00:17.495319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:00:17.495550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:00:17.496108 systemd[1]: kubelet.service: Consumed 1.650s CPU time, 267.7M memory peak. Jan 28 02:00:19.251963 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 02:00:19.258433 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:53884.service - OpenSSH per-connection server daemon (10.0.0.1:53884). 
Jan 28 02:00:19.864598 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 53884 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:19.874762 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:19.927425 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 02:00:19.945506 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 02:00:20.009573 systemd-logind[1569]: New session 1 of user core. Jan 28 02:00:20.096412 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 02:00:20.106288 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 02:00:20.183697 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:20.205174 systemd-logind[1569]: New session 2 of user core. Jan 28 02:00:20.573136 systemd[1705]: Queued start job for default target default.target. Jan 28 02:00:20.593272 systemd[1705]: Created slice app.slice - User Application Slice. Jan 28 02:00:20.593372 systemd[1705]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 02:00:20.593393 systemd[1705]: Reached target paths.target - Paths. Jan 28 02:00:20.593528 systemd[1705]: Reached target timers.target - Timers. Jan 28 02:00:20.603249 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 02:00:20.607601 systemd[1705]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 02:00:20.630435 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 02:00:20.631642 systemd[1705]: Reached target sockets.target - Sockets. Jan 28 02:00:20.652413 systemd[1705]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 02:00:20.652677 systemd[1705]: Reached target basic.target - Basic System. 
Jan 28 02:00:20.657716 systemd[1705]: Reached target default.target - Main User Target. Jan 28 02:00:20.657813 systemd[1705]: Startup finished in 433ms. Jan 28 02:00:20.658354 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 02:00:20.672694 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 02:00:20.754157 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:53894.service - OpenSSH per-connection server daemon (10.0.0.1:53894). Jan 28 02:00:21.102663 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 53894 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:21.131363 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:21.207782 systemd-logind[1569]: New session 3 of user core. Jan 28 02:00:21.245433 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 02:00:21.355959 sshd[1723]: Connection closed by 10.0.0.1 port 53894 Jan 28 02:00:21.360418 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:21.414659 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:53894.service: Deactivated successfully. Jan 28 02:00:21.423709 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 02:00:21.445168 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Jan 28 02:00:21.500252 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:53908.service - OpenSSH per-connection server daemon (10.0.0.1:53908). Jan 28 02:00:21.510497 systemd-logind[1569]: Removed session 3. Jan 28 02:00:21.826207 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 53908 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:21.831499 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:21.878508 systemd-logind[1569]: New session 4 of user core. Jan 28 02:00:21.924550 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 28 02:00:22.030065 sshd[1733]: Connection closed by 10.0.0.1 port 53908 Jan 28 02:00:22.029175 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:22.081194 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:53908.service: Deactivated successfully. Jan 28 02:00:22.115734 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 02:00:22.135776 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Jan 28 02:00:22.142593 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:53924.service - OpenSSH per-connection server daemon (10.0.0.1:53924). Jan 28 02:00:22.147584 systemd-logind[1569]: Removed session 4. Jan 28 02:00:22.580486 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 53924 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:22.620582 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:22.680443 systemd-logind[1569]: New session 5 of user core. Jan 28 02:00:22.738009 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 02:00:22.953246 sshd[1744]: Connection closed by 10.0.0.1 port 53924 Jan 28 02:00:22.957406 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:22.985304 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:53924.service: Deactivated successfully. Jan 28 02:00:22.991307 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 02:00:23.005598 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Jan 28 02:00:23.044777 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134). Jan 28 02:00:23.049389 systemd-logind[1569]: Removed session 5. 
Jan 28 02:00:23.304628 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:23.339314 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:23.399561 systemd-logind[1569]: New session 6 of user core. Jan 28 02:00:23.435362 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 02:00:23.692389 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 02:00:23.693244 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:00:23.761700 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 28 02:00:23.777466 sshd[1754]: Connection closed by 10.0.0.1 port 55134 Jan 28 02:00:23.772391 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:23.833586 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:55134.service: Deactivated successfully. Jan 28 02:00:23.846253 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 02:00:23.859020 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Jan 28 02:00:23.876287 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:55148.service - OpenSSH per-connection server daemon (10.0.0.1:55148). Jan 28 02:00:23.896324 systemd-logind[1569]: Removed session 6. Jan 28 02:00:24.188042 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 55148 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:24.187558 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:24.275229 systemd-logind[1569]: New session 7 of user core. Jan 28 02:00:24.302715 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 28 02:00:24.461955 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 02:00:24.465305 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:00:24.486241 sudo[1768]: pam_unix(sudo:session): session closed for user root Jan 28 02:00:24.547526 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 02:00:24.550418 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:00:24.589096 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 02:00:24.985000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 02:00:24.992374 augenrules[1792]: No rules Jan 28 02:00:25.000336 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 02:00:25.000791 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 28 02:00:25.056513 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 28 02:00:25.056711 kernel: audit: type=1305 audit(1769565624.985:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 02:00:25.075753 sudo[1767]: pam_unix(sudo:session): session closed for user root Jan 28 02:00:25.091773 sshd[1766]: Connection closed by 10.0.0.1 port 55148 Jan 28 02:00:25.092592 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:24.985000 audit[1792]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefc26d750 a2=420 a3=0 items=0 ppid=1773 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:25.256952 kernel: audit: type=1300 audit(1769565624.985:227): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefc26d750 a2=420 a3=0 items=0 ppid=1773 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:25.257590 kernel: audit: type=1327 audit(1769565624.985:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 02:00:24.985000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 02:00:25.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.284940 kernel: audit: type=1130 audit(1769565625.055:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 02:00:25.305243 kernel: audit: type=1131 audit(1769565625.055:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.074000 audit[1767]: USER_END pid=1767 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.391394 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:55148.service: Deactivated successfully. Jan 28 02:00:25.394135 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 02:00:25.419571 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Jan 28 02:00:25.447356 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:55160.service - OpenSSH per-connection server daemon (10.0.0.1:55160). Jan 28 02:00:25.462815 systemd-logind[1569]: Removed session 7. Jan 28 02:00:25.473270 kernel: audit: type=1106 audit(1769565625.074:230): pid=1767 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.473368 kernel: audit: type=1104 audit(1769565625.074:231): pid=1767 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 02:00:25.074000 audit[1767]: CRED_DISP pid=1767 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.164000 audit[1762]: USER_END pid=1762 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:25.164000 audit[1762]: CRED_DISP pid=1762 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:25.580722 kernel: audit: type=1106 audit(1769565625.164:232): pid=1762 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:25.580784 kernel: audit: type=1104 audit(1769565625.164:233): pid=1762 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:25.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.114:22-10.0.0.1:55148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 02:00:25.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.114:22-10.0.0.1:55160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.690023 kernel: audit: type=1131 audit(1769565625.389:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.114:22-10.0.0.1:55148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:25.887000 audit[1801]: USER_ACCT pid=1801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:25.892607 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:fIgsaUvZW/rcdqAk7xTpFeaarOMccv8nzMqKWJhJMqA Jan 28 02:00:25.901000 audit[1801]: CRED_ACQ pid=1801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:25.904000 audit[1801]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdba3b160 a2=3 a3=0 items=0 ppid=1 pid=1801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:25.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 02:00:25.950008 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:26.000653 systemd-logind[1569]: New session 8 of user core. Jan 28 02:00:26.052679 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 02:00:26.080000 audit[1801]: USER_START pid=1801 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:26.094000 audit[1805]: CRED_ACQ pid=1805 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:00:26.276000 audit[1806]: USER_ACCT pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:00:26.287172 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 02:00:26.287978 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:00:26.284000 audit[1806]: CRED_REFR pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:00:26.284000 audit[1806]: USER_START pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:00:26.396815 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 02:00:26.778290 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 02:00:26.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 02:00:26.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:26.784603 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 02:00:27.687012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 02:00:27.716404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:00:29.472786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:00:29.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:29.575592 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:00:30.287396 kubelet[1837]: E0128 02:00:30.287139 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:00:30.311431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:00:30.316056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:00:30.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 02:00:30.334659 systemd[1]: kubelet.service: Consumed 611ms CPU time, 108.9M memory peak. 
Jan 28 02:00:30.373741 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 28 02:00:30.373994 kernel: audit: type=1131 audit(1769565630.334:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 02:00:36.353182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:00:36.353663 systemd[1]: kubelet.service: Consumed 611ms CPU time, 108.9M memory peak. Jan 28 02:00:36.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:36.385520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:00:36.402383 kernel: audit: type=1130 audit(1769565636.352:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:36.402518 kernel: audit: type=1131 audit(1769565636.352:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:36.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:36.614553 systemd[1]: Reload requested from client PID 1866 ('systemctl') (unit session-8.scope)... Jan 28 02:00:36.614628 systemd[1]: Reloading... Jan 28 02:00:36.949704 zram_generator::config[1907]: No configuration found. Jan 28 02:00:37.744727 systemd[1]: Reloading finished in 1129 ms. 
Jan 28 02:00:37.910000 audit: BPF prog-id=63 op=LOAD Jan 28 02:00:37.912000 audit: BPF prog-id=64 op=LOAD Jan 28 02:00:37.933666 kernel: audit: type=1334 audit(1769565637.910:250): prog-id=63 op=LOAD Jan 28 02:00:37.924000 audit: BPF prog-id=43 op=UNLOAD Jan 28 02:00:37.976702 kernel: audit: type=1334 audit(1769565637.912:251): prog-id=64 op=LOAD Jan 28 02:00:37.976821 kernel: audit: type=1334 audit(1769565637.924:252): prog-id=43 op=UNLOAD Jan 28 02:00:37.924000 audit: BPF prog-id=44 op=UNLOAD Jan 28 02:00:37.988966 kernel: audit: type=1334 audit(1769565637.924:253): prog-id=44 op=UNLOAD Jan 28 02:00:37.989060 kernel: audit: type=1334 audit(1769565637.924:254): prog-id=65 op=LOAD Jan 28 02:00:37.924000 audit: BPF prog-id=65 op=LOAD Jan 28 02:00:38.002947 kernel: audit: type=1334 audit(1769565637.924:255): prog-id=49 op=UNLOAD Jan 28 02:00:38.003084 kernel: audit: type=1334 audit(1769565637.924:256): prog-id=66 op=LOAD Jan 28 02:00:37.924000 audit: BPF prog-id=49 op=UNLOAD Jan 28 02:00:37.924000 audit: BPF prog-id=66 op=LOAD Jan 28 02:00:38.018821 kernel: audit: type=1334 audit(1769565637.924:257): prog-id=67 op=LOAD Jan 28 02:00:37.924000 audit: BPF prog-id=67 op=LOAD Jan 28 02:00:37.924000 audit: BPF prog-id=50 op=UNLOAD Jan 28 02:00:37.924000 audit: BPF prog-id=51 op=UNLOAD Jan 28 02:00:37.931000 audit: BPF prog-id=68 op=LOAD Jan 28 02:00:37.931000 audit: BPF prog-id=55 op=UNLOAD Jan 28 02:00:37.945000 audit: BPF prog-id=69 op=LOAD Jan 28 02:00:37.945000 audit: BPF prog-id=70 op=LOAD Jan 28 02:00:37.945000 audit: BPF prog-id=56 op=UNLOAD Jan 28 02:00:37.945000 audit: BPF prog-id=57 op=UNLOAD Jan 28 02:00:37.945000 audit: BPF prog-id=71 op=LOAD Jan 28 02:00:37.945000 audit: BPF prog-id=48 op=UNLOAD Jan 28 02:00:37.953000 audit: BPF prog-id=72 op=LOAD Jan 28 02:00:37.953000 audit: BPF prog-id=60 op=UNLOAD Jan 28 02:00:37.954000 audit: BPF prog-id=73 op=LOAD Jan 28 02:00:37.954000 audit: BPF prog-id=74 op=LOAD Jan 28 02:00:37.954000 audit: BPF prog-id=61 op=UNLOAD 
Jan 28 02:00:37.954000 audit: BPF prog-id=62 op=UNLOAD Jan 28 02:00:37.971000 audit: BPF prog-id=75 op=LOAD Jan 28 02:00:37.971000 audit: BPF prog-id=45 op=UNLOAD Jan 28 02:00:37.971000 audit: BPF prog-id=76 op=LOAD Jan 28 02:00:37.971000 audit: BPF prog-id=77 op=LOAD Jan 28 02:00:37.971000 audit: BPF prog-id=46 op=UNLOAD Jan 28 02:00:37.971000 audit: BPF prog-id=47 op=UNLOAD Jan 28 02:00:38.045000 audit: BPF prog-id=78 op=LOAD Jan 28 02:00:38.045000 audit: BPF prog-id=52 op=UNLOAD Jan 28 02:00:38.045000 audit: BPF prog-id=79 op=LOAD Jan 28 02:00:38.045000 audit: BPF prog-id=80 op=LOAD Jan 28 02:00:38.045000 audit: BPF prog-id=53 op=UNLOAD Jan 28 02:00:38.045000 audit: BPF prog-id=54 op=UNLOAD Jan 28 02:00:38.049000 audit: BPF prog-id=81 op=LOAD Jan 28 02:00:38.059000 audit: BPF prog-id=59 op=UNLOAD Jan 28 02:00:38.062000 audit: BPF prog-id=82 op=LOAD Jan 28 02:00:38.062000 audit: BPF prog-id=58 op=UNLOAD Jan 28 02:00:38.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:38.154993 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:00:38.177211 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 02:00:38.177780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:00:38.178009 systemd[1]: kubelet.service: Consumed 282ms CPU time, 98.5M memory peak. Jan 28 02:00:38.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:38.201329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:00:38.931517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 02:00:38.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:00:38.970119 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:00:39.160943 kubelet[1960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:00:39.160943 kubelet[1960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 02:00:39.160943 kubelet[1960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 02:00:39.162735 kubelet[1960]: I0128 02:00:39.161042 1960 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:00:40.105296 kubelet[1960]: I0128 02:00:40.105161 1960 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 02:00:40.105296 kubelet[1960]: I0128 02:00:40.105201 1960 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:00:40.106100 kubelet[1960]: I0128 02:00:40.105533 1960 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 02:00:40.226436 kubelet[1960]: I0128 02:00:40.221256 1960 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:00:40.282034 kubelet[1960]: I0128 02:00:40.279154 1960 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 02:00:40.305411 kubelet[1960]: I0128 02:00:40.303733 1960 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 02:00:40.316030 kubelet[1960]: I0128 02:00:40.312275 1960 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:00:40.316030 kubelet[1960]: I0128 02:00:40.315023 1960 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.114","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 02:00:40.316030 kubelet[1960]: I0128 02:00:40.315261 1960 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 28 02:00:40.316030 kubelet[1960]: I0128 02:00:40.315278 1960 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 02:00:40.316425 kubelet[1960]: I0128 02:00:40.315460 1960 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:00:40.340784 kubelet[1960]: I0128 02:00:40.338592 1960 kubelet.go:446] "Attempting to sync node with API server" Jan 28 02:00:40.340784 kubelet[1960]: I0128 02:00:40.338687 1960 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:00:40.340784 kubelet[1960]: I0128 02:00:40.338728 1960 kubelet.go:352] "Adding apiserver pod source" Jan 28 02:00:40.340784 kubelet[1960]: I0128 02:00:40.338743 1960 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:00:40.344049 kubelet[1960]: E0128 02:00:40.341501 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:40.344049 kubelet[1960]: E0128 02:00:40.341594 1960 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:40.352940 kubelet[1960]: I0128 02:00:40.351090 1960 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 02:00:40.352940 kubelet[1960]: I0128 02:00:40.351583 1960 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 02:00:40.352940 kubelet[1960]: W0128 02:00:40.351645 1960 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 28 02:00:40.377614 kubelet[1960]: I0128 02:00:40.369827 1960 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 02:00:40.377614 kubelet[1960]: I0128 02:00:40.370028 1960 server.go:1287] "Started kubelet" Jan 28 02:00:40.377614 kubelet[1960]: W0128 02:00:40.370543 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 28 02:00:40.377614 kubelet[1960]: E0128 02:00:40.370591 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:40.377614 kubelet[1960]: I0128 02:00:40.370643 1960 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:00:40.377939 kubelet[1960]: I0128 02:00:40.377633 1960 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:00:40.381415 kubelet[1960]: I0128 02:00:40.378605 1960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:00:40.381415 kubelet[1960]: I0128 02:00:40.380822 1960 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:00:40.388954 kubelet[1960]: I0128 02:00:40.384426 1960 server.go:479] "Adding debug handlers to kubelet server" Jan 28 02:00:40.388954 kubelet[1960]: I0128 02:00:40.384811 1960 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:00:40.388954 kubelet[1960]: E0128 02:00:40.388205 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:00:40.388954 
kubelet[1960]: I0128 02:00:40.388234 1960 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 02:00:40.388954 kubelet[1960]: I0128 02:00:40.388311 1960 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 02:00:40.388954 kubelet[1960]: I0128 02:00:40.388443 1960 reconciler.go:26] "Reconciler: start to sync state" Jan 28 02:00:40.407114 kubelet[1960]: I0128 02:00:40.401268 1960 factory.go:221] Registration of the systemd container factory successfully Jan 28 02:00:40.407114 kubelet[1960]: I0128 02:00:40.401412 1960 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:00:40.421708 kubelet[1960]: E0128 02:00:40.421448 1960 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 02:00:40.423511 kubelet[1960]: I0128 02:00:40.423395 1960 factory.go:221] Registration of the containerd container factory successfully Jan 28 02:00:40.461440 kubelet[1960]: E0128 02:00:40.457716 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 28 02:00:40.461440 kubelet[1960]: W0128 02:00:40.458088 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 28 02:00:40.461440 kubelet[1960]: E0128 02:00:40.458127 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource 
\"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:40.466102 kubelet[1960]: E0128 02:00:40.458292 1960 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29308fd977b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.370001787 +0000 UTC m=+1.381713503,LastTimestamp:2026-01-28 02:00:40.370001787 +0000 UTC m=+1.381713503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.466102 kubelet[1960]: W0128 02:00:40.465746 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 28 02:00:40.466102 kubelet[1960]: E0128 02:00:40.465778 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:40.499696 kubelet[1960]: E0128 02:00:40.499164 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:00:40.504014 kubelet[1960]: E0128 02:00:40.503449 1960 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec2930c0b7c9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.421244058 +0000 UTC m=+1.432955824,LastTimestamp:2026-01-28 02:00:40.421244058 +0000 UTC m=+1.432955824,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.552227 kubelet[1960]: I0128 02:00:40.548502 1960 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:00:40.552227 kubelet[1960]: I0128 02:00:40.548568 1960 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:00:40.552227 kubelet[1960]: I0128 02:00:40.548598 1960 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:00:40.568689 kubelet[1960]: E0128 02:00:40.568336 1960 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.589583 kubelet[1960]: E0128 02:00:40.589466 1960 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.599673 kubelet[1960]: E0128 02:00:40.599337 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:00:40.613087 kubelet[1960]: E0128 02:00:40.611429 1960 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC m=+1.553157535,LastTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC 
m=+1.553157535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.617000 audit[1978]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.617000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd38b08530 a2=0 a3=0 items=0 ppid=1960 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 02:00:40.637000 audit[1981]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.637000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd6e37b9d0 a2=0 a3=0 items=0 ppid=1960 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 02:00:40.652695 kubelet[1960]: I0128 02:00:40.649736 1960 policy_none.go:49] "None policy: Start" Jan 28 02:00:40.652695 kubelet[1960]: I0128 02:00:40.649758 1960 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 02:00:40.652695 kubelet[1960]: I0128 02:00:40.649773 1960 state_mem.go:35] "Initializing new in-memory state store" Jan 28 02:00:40.646000 audit[1984]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 28 02:00:40.646000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe0ffbcc00 a2=0 a3=0 items=0 ppid=1960 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 02:00:40.672000 audit[1986]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.672000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc20f84740 a2=0 a3=0 items=0 ppid=1960 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 02:00:40.682331 kubelet[1960]: E0128 02:00:40.681650 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jan 28 02:00:40.701925 kubelet[1960]: E0128 02:00:40.699660 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:00:40.730440 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 02:00:40.763014 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 02:00:40.784293 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 02:00:40.790056 kubelet[1960]: I0128 02:00:40.789433 1960 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 02:00:40.790056 kubelet[1960]: I0128 02:00:40.789750 1960 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:00:40.790056 kubelet[1960]: I0128 02:00:40.789763 1960 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:00:40.793123 kubelet[1960]: I0128 02:00:40.793013 1960 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:00:40.799660 kubelet[1960]: E0128 02:00:40.799549 1960 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 02:00:40.799791 kubelet[1960]: E0128 02:00:40.799774 1960 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" not found" Jan 28 02:00:40.816695 kubelet[1960]: E0128 02:00:40.816060 1960 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29322a3b226 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.80031799 +0000 UTC m=+1.812029696,LastTimestamp:2026-01-28 02:00:40.80031799 +0000 UTC m=+1.812029696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.894185 kubelet[1960]: I0128 02:00:40.893968 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:40.882000 audit[1992]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.882000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffa9adc060 a2=0 a3=0 items=0 ppid=1960 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 02:00:40.897150 kubelet[1960]: I0128 02:00:40.896481 1960 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 28 02:00:40.911000 audit[1993]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:00:40.911000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd7ff6ffd0 a2=0 a3=0 items=0 ppid=1960 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 02:00:40.911000 audit[1994]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.911000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffdf7c4260 a2=0 a3=0 items=0 ppid=1960 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 02:00:40.929000 audit[1996]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:00:40.929000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd265b640 a2=0 a3=0 items=0 ppid=1960 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.929000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 02:00:40.933000 audit[1995]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.933000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcd0c08680 a2=0 a3=0 items=0 ppid=1960 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.936999 kubelet[1960]: I0128 02:00:40.914444 1960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 02:00:40.936999 kubelet[1960]: I0128 02:00:40.914479 1960 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 02:00:40.936999 kubelet[1960]: I0128 02:00:40.914505 1960 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 02:00:40.936999 kubelet[1960]: I0128 02:00:40.914517 1960 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 02:00:40.936999 kubelet[1960]: E0128 02:00:40.914944 1960 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 28 02:00:40.936999 kubelet[1960]: W0128 02:00:40.929215 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 28 02:00:40.936999 kubelet[1960]: E0128 02:00:40.929214 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:40.89383131 +0000 UTC m=+1.905543016,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 02:00:40.937485 kubelet[1960]: E0128 02:00:40.929312 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User 
\"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:40.937729 kubelet[1960]: E0128 02:00:40.937506 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:40.893926781 +0000 UTC m=+1.905638487,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:40.944000 audit[1997]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:00:40.944000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffda9849630 a2=0 a3=0 items=0 ppid=1960 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 02:00:40.949000 audit[1998]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:00:40.949000 audit[1998]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffd952f17e0 a2=0 a3=0 items=0 ppid=1960 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 02:00:40.957294 kubelet[1960]: E0128 02:00:40.957249 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:00:40.956000 audit[1999]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:00:40.956000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc97de5bf0 a2=0 a3=0 items=0 ppid=1960 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:00:40.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 02:00:40.993271 kubelet[1960]: E0128 02:00:40.991824 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313359ec4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC m=+1.553157535,LastTimestamp:2026-01-28 02:00:40.893931783 +0000 UTC m=+1.905643490,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.149368 kubelet[1960]: E0128 02:00:41.146251 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jan 28 02:00:41.170468 kubelet[1960]: I0128 02:00:41.164296 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:41.236943 kubelet[1960]: E0128 02:00:41.214044 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:41.164254324 +0000 UTC m=+2.175966030,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.275484 kubelet[1960]: E0128 02:00:41.275434 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:00:41.313232 kubelet[1960]: E0128 02:00:41.312548 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:41.164260079 +0000 UTC m=+2.175971785,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.343209 kubelet[1960]: E0128 02:00:41.342344 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:41.350021 kubelet[1960]: E0128 02:00:41.347129 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313359ec4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC m=+1.553157535,LastTimestamp:2026-01-28 02:00:41.164263588 +0000 UTC m=+2.175975295,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.680552 kubelet[1960]: I0128 02:00:41.679315 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:41.762986 kubelet[1960]: E0128 02:00:41.760430 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:41.67926948 +0000 UTC m=+2.690981186,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.824071 kubelet[1960]: E0128 02:00:41.823109 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:41.679279677 +0000 UTC m=+2.690991583,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.825146 kubelet[1960]: E0128 02:00:41.824735 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:00:41.859191 kubelet[1960]: E0128 02:00:41.855709 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313359ec4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC m=+1.553157535,LastTimestamp:2026-01-28 02:00:41.679283657 +0000 UTC m=+2.690995362,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:41.859430 kubelet[1960]: W0128 
02:00:41.858417 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 28 02:00:41.859430 kubelet[1960]: E0128 02:00:41.859422 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:41.945346 kubelet[1960]: W0128 02:00:41.939478 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 28 02:00:41.945346 kubelet[1960]: E0128 02:00:41.942678 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:42.007246 kubelet[1960]: E0128 02:00:42.004343 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="1.6s" Jan 28 02:00:42.008698 kubelet[1960]: W0128 02:00:42.008355 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 28 02:00:42.008698 kubelet[1960]: E0128 02:00:42.008437 1960 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:42.334948 kubelet[1960]: W0128 02:00:42.333621 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 28 02:00:42.334948 kubelet[1960]: E0128 02:00:42.333706 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:42.343482 kubelet[1960]: E0128 02:00:42.343027 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:42.633065 kubelet[1960]: I0128 02:00:42.631969 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:42.675786 kubelet[1960]: E0128 02:00:42.675329 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 
02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:42.6317877 +0000 UTC m=+3.643499416,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:42.711014 kubelet[1960]: E0128 02:00:42.707983 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:42.631812153 +0000 UTC m=+3.643523899,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:42.734814 kubelet[1960]: E0128 02:00:42.732349 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:00:42.740124 kubelet[1960]: E0128 02:00:42.736375 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313359ec4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC m=+1.553157535,LastTimestamp:2026-01-28 02:00:42.631826178 +0000 UTC m=+3.643537914,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:43.347187 kubelet[1960]: E0128 02:00:43.343610 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:43.672242 kubelet[1960]: E0128 02:00:43.670662 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s" Jan 28 02:00:43.823962 kubelet[1960]: W0128 02:00:43.823108 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 28 02:00:43.823962 kubelet[1960]: E0128 02:00:43.823205 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:43.877906 kubelet[1960]: W0128 02:00:43.876966 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list 
resource "nodes" in API group "" at the cluster scope Jan 28 02:00:43.877906 kubelet[1960]: E0128 02:00:43.877077 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:44.354518 kubelet[1960]: I0128 02:00:44.338047 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:44.354518 kubelet[1960]: E0128 02:00:44.350024 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:44.447719 kubelet[1960]: E0128 02:00:44.436507 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:44.337982923 +0000 UTC m=+5.349694630,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:44.611006 kubelet[1960]: E0128 02:00:44.608590 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" 
node="10.0.0.114" Jan 28 02:00:44.611006 kubelet[1960]: E0128 02:00:44.608719 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:44.337988767 +0000 UTC m=+5.349700464,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:44.645635 kubelet[1960]: E0128 02:00:44.645097 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313359ec4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC m=+1.553157535,LastTimestamp:2026-01-28 02:00:44.338008015 +0000 UTC m=+5.349719722,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:44.855966 kubelet[1960]: W0128 02:00:44.854593 
1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 28 02:00:44.855966 kubelet[1960]: E0128 02:00:44.854713 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:44.971820 kubelet[1960]: W0128 02:00:44.966609 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 28 02:00:44.971820 kubelet[1960]: E0128 02:00:44.971182 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:45.363428 kubelet[1960]: E0128 02:00:45.350507 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:46.360977 kubelet[1960]: E0128 02:00:46.358480 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:46.920965 kubelet[1960]: E0128 02:00:46.919424 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Jan 28 02:00:47.359348 
kubelet[1960]: E0128 02:00:47.358975 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:47.452673 kubelet[1960]: W0128 02:00:47.449436 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 28 02:00:47.452673 kubelet[1960]: E0128 02:00:47.452617 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:47.810730 kubelet[1960]: I0128 02:00:47.810134 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:47.827044 kubelet[1960]: E0128 02:00:47.826169 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:47.810087549 +0000 UTC m=+8.821799255,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:47.872790 kubelet[1960]: 
E0128 02:00:47.871682 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:47.810094164 +0000 UTC m=+8.821805870,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:47.872790 kubelet[1960]: E0128 02:00:47.872397 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:00:47.911779 kubelet[1960]: E0128 02:00:47.908543 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313359ec4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313359ec4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541445828 +0000 UTC 
m=+1.553157535,LastTimestamp:2026-01-28 02:00:47.810099367 +0000 UTC m=+8.821811073,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:48.365197 kubelet[1960]: E0128 02:00:48.363207 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:48.584727 kubelet[1960]: W0128 02:00:48.584625 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 28 02:00:48.584727 kubelet[1960]: E0128 02:00:48.584685 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:49.073293 kubelet[1960]: W0128 02:00:49.072250 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 28 02:00:49.073293 kubelet[1960]: E0128 02:00:49.072307 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:49.368472 kubelet[1960]: E0128 02:00:49.366524 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:50.370557 kubelet[1960]: E0128 02:00:50.370188 1960 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:50.802680 kubelet[1960]: E0128 02:00:50.800757 1960 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" not found" Jan 28 02:00:51.126580 kubelet[1960]: W0128 02:00:51.125768 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 28 02:00:51.131442 kubelet[1960]: E0128 02:00:51.130656 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:51.371556 kubelet[1960]: E0128 02:00:51.370677 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:52.374703 kubelet[1960]: E0128 02:00:52.374039 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:53.340692 kubelet[1960]: E0128 02:00:53.339187 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 28 02:00:53.376388 kubelet[1960]: E0128 02:00:53.375253 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:54.286098 kubelet[1960]: I0128 02:00:54.284180 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:00:54.335460 kubelet[1960]: E0128 
02:00:54.335245 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313357071\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313357071 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541433969 +0000 UTC m=+1.553145675,LastTimestamp:2026-01-28 02:00:54.284139911 +0000 UTC m=+15.295851617,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:54.372568 kubelet[1960]: E0128 02:00:54.372225 1960 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.114.188ec29313358dc7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.188ec29313358dc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2026-01-28 02:00:40.541441479 +0000 UTC m=+1.553153184,LastTimestamp:2026-01-28 02:00:54.284145864 +0000 UTC m=+15.295857570,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Jan 28 02:00:54.378595 kubelet[1960]: E0128 02:00:54.378538 1960 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:54.387705 kubelet[1960]: E0128 02:00:54.386811 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:00:55.380610 kubelet[1960]: E0128 02:00:55.380016 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:56.391259 kubelet[1960]: E0128 02:00:56.388608 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:56.711592 update_engine[1573]: I20260128 02:00:56.708420 1573 update_attempter.cc:509] Updating boot flags... Jan 28 02:00:57.393601 kubelet[1960]: E0128 02:00:57.393429 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:58.402588 kubelet[1960]: E0128 02:00:58.395727 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:58.516034 kubelet[1960]: W0128 02:00:58.514407 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 28 02:00:58.516034 kubelet[1960]: E0128 02:00:58.514460 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:59.366748 kubelet[1960]: W0128 02:00:59.366527 1960 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 28 02:00:59.366748 kubelet[1960]: E0128 02:00:59.366608 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:00:59.406034 kubelet[1960]: E0128 02:00:59.405401 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:00:59.426079 kubelet[1960]: W0128 02:00:59.425809 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 28 02:00:59.426079 kubelet[1960]: E0128 02:00:59.426055 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 28 02:01:00.340441 kubelet[1960]: E0128 02:01:00.340230 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:00.407584 kubelet[1960]: E0128 02:01:00.406669 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:00.808148 kubelet[1960]: E0128 02:01:00.807987 1960 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" 
not found" Jan 28 02:01:00.889657 kubelet[1960]: E0128 02:01:00.886295 1960 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 28 02:01:01.399386 kubelet[1960]: I0128 02:01:01.398945 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:01:01.410143 kubelet[1960]: E0128 02:01:01.408743 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:01.713801 kubelet[1960]: E0128 02:01:01.712083 1960 kubelet_node_status.go:113] "Unable to register node with API server, error getting existing node" err="nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.114" Jan 28 02:01:02.168080 kubelet[1960]: W0128 02:01:02.166620 1960 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 28 02:01:02.168080 kubelet[1960]: E0128 02:01:02.167958 1960 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 28 02:01:02.413994 kubelet[1960]: E0128 02:01:02.413461 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:03.416071 kubelet[1960]: E0128 02:01:03.413709 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:04.418311 
kubelet[1960]: E0128 02:01:04.416313 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:05.422569 kubelet[1960]: E0128 02:01:05.420983 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:05.976470 kubelet[1960]: E0128 02:01:05.971655 1960 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.114" not found Jan 28 02:01:06.112976 kubelet[1960]: I0128 02:01:06.109415 1960 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 02:01:06.423004 kubelet[1960]: E0128 02:01:06.422078 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:07.427772 kubelet[1960]: E0128 02:01:07.423084 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:08.177085 kubelet[1960]: E0128 02:01:08.170501 1960 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.114\" not found" node="10.0.0.114" Jan 28 02:01:08.428237 kubelet[1960]: E0128 02:01:08.427321 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:08.724407 kubelet[1960]: I0128 02:01:08.724030 1960 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.114" Jan 28 02:01:09.294079 kubelet[1960]: I0128 02:01:09.287019 1960 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.114" Jan 28 02:01:09.294079 kubelet[1960]: E0128 02:01:09.287086 1960 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.114\": node \"10.0.0.114\" not found" Jan 28 
02:01:09.429123 kubelet[1960]: E0128 02:01:09.428196 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:10.436143 kubelet[1960]: E0128 02:01:10.430187 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:10.812296 kubelet[1960]: E0128 02:01:10.811272 1960 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" not found" Jan 28 02:01:11.124000 audit[1806]: USER_END pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:01:11.125602 sudo[1806]: pam_unix(sudo:session): session closed for user root Jan 28 02:01:11.141374 kernel: kauditd_printk_skb: 71 callbacks suppressed Jan 28 02:01:11.141462 kernel: audit: type=1106 audit(1769565671.124:305): pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:01:11.178677 sshd[1805]: Connection closed by 10.0.0.1 port 55160 Jan 28 02:01:11.182986 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:11.227696 kubelet[1960]: E0128 02:01:11.227647 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.244630 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:55160.service: Deactivated successfully. Jan 28 02:01:11.125000 audit[1806]: CRED_DISP pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 02:01:11.253069 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 02:01:11.254320 systemd[1]: session-8.scope: Consumed 1.738s CPU time, 77.5M memory peak. Jan 28 02:01:11.266659 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Jan 28 02:01:11.279516 systemd-logind[1569]: Removed session 8. Jan 28 02:01:11.322066 kernel: audit: type=1104 audit(1769565671.125:306): pid=1806 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 02:01:11.329454 kernel: audit: type=1106 audit(1769565671.208:307): pid=1801 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:01:11.208000 audit[1801]: USER_END pid=1801 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:01:11.329782 kubelet[1960]: E0128 02:01:11.329032 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.208000 audit[1801]: CRED_DISP pid=1801 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:01:11.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.114:22-10.0.0.1:55160 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:01:11.430379 kernel: audit: type=1104 audit(1769565671.208:308): pid=1801 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 28 02:01:11.430493 kernel: audit: type=1131 audit(1769565671.242:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.114:22-10.0.0.1:55160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 02:01:11.430528 kubelet[1960]: E0128 02:01:11.429519 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.431434 kubelet[1960]: E0128 02:01:11.431158 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:11.530518 kubelet[1960]: E0128 02:01:11.530314 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.639537 kubelet[1960]: E0128 02:01:11.637336 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.744140 kubelet[1960]: E0128 02:01:11.740251 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.842224 kubelet[1960]: E0128 02:01:11.840833 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:11.946562 kubelet[1960]: E0128 02:01:11.943414 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.045008 kubelet[1960]: E0128 02:01:12.044587 1960 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.148184 kubelet[1960]: E0128 02:01:12.146368 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.248353 kubelet[1960]: E0128 02:01:12.247599 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.350147 kubelet[1960]: E0128 02:01:12.348123 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.437305 kubelet[1960]: E0128 02:01:12.437148 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:12.453498 kubelet[1960]: E0128 02:01:12.453150 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.554734 kubelet[1960]: E0128 02:01:12.554426 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.656294 kubelet[1960]: E0128 02:01:12.656141 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.760109 kubelet[1960]: E0128 02:01:12.757532 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.861556 kubelet[1960]: E0128 02:01:12.861322 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:12.963359 kubelet[1960]: E0128 02:01:12.963219 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.071010 kubelet[1960]: E0128 02:01:13.069062 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 
28 02:01:13.174273 kubelet[1960]: E0128 02:01:13.172051 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.279834 kubelet[1960]: E0128 02:01:13.273238 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.378993 kubelet[1960]: E0128 02:01:13.378399 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.438752 kubelet[1960]: E0128 02:01:13.438412 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:13.482146 kubelet[1960]: E0128 02:01:13.481710 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.584122 kubelet[1960]: E0128 02:01:13.583197 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.686203 kubelet[1960]: E0128 02:01:13.684384 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.788379 kubelet[1960]: E0128 02:01:13.786135 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.888826 kubelet[1960]: E0128 02:01:13.888345 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:13.994472 kubelet[1960]: E0128 02:01:13.989675 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.090895 kubelet[1960]: E0128 02:01:14.090238 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.193250 kubelet[1960]: E0128 02:01:14.191616 1960 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.293494 kubelet[1960]: E0128 02:01:14.292988 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.393551 kubelet[1960]: E0128 02:01:14.393494 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.441413 kubelet[1960]: E0128 02:01:14.441346 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:14.494394 kubelet[1960]: E0128 02:01:14.493692 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.597770 kubelet[1960]: E0128 02:01:14.596554 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.699317 kubelet[1960]: E0128 02:01:14.697940 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.799025 kubelet[1960]: E0128 02:01:14.798418 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:14.902355 kubelet[1960]: E0128 02:01:14.899101 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.002507 kubelet[1960]: E0128 02:01:14.999561 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.101533 kubelet[1960]: E0128 02:01:15.101033 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.203498 kubelet[1960]: E0128 02:01:15.203001 1960 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"10.0.0.114\" not found" Jan 28 02:01:15.308082 kubelet[1960]: E0128 02:01:15.307011 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.409297 kubelet[1960]: E0128 02:01:15.408332 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.446733 kubelet[1960]: E0128 02:01:15.443507 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:15.515595 kubelet[1960]: E0128 02:01:15.513195 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.614097 kubelet[1960]: E0128 02:01:15.613499 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.717471 kubelet[1960]: E0128 02:01:15.714184 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.820979 kubelet[1960]: E0128 02:01:15.819440 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:15.923703 kubelet[1960]: E0128 02:01:15.922808 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.027214 kubelet[1960]: E0128 02:01:16.024111 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.125740 kubelet[1960]: E0128 02:01:16.124753 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.229711 kubelet[1960]: E0128 02:01:16.226074 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.334177 kubelet[1960]: 
E0128 02:01:16.332139 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.444539 kubelet[1960]: E0128 02:01:16.439559 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.450035 kubelet[1960]: E0128 02:01:16.447361 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:16.543007 kubelet[1960]: E0128 02:01:16.540012 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.644777 kubelet[1960]: E0128 02:01:16.643137 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.748091 kubelet[1960]: E0128 02:01:16.744328 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.849179 kubelet[1960]: E0128 02:01:16.848118 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:16.954338 kubelet[1960]: E0128 02:01:16.949738 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.058652 kubelet[1960]: E0128 02:01:17.055014 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.158468 kubelet[1960]: E0128 02:01:17.155153 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.261260 kubelet[1960]: E0128 02:01:17.261045 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.364164 kubelet[1960]: E0128 02:01:17.363036 1960 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.455753 kubelet[1960]: E0128 02:01:17.452417 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:17.467994 kubelet[1960]: E0128 02:01:17.464097 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.565531 kubelet[1960]: E0128 02:01:17.564454 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.665129 kubelet[1960]: E0128 02:01:17.665061 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.765758 kubelet[1960]: E0128 02:01:17.765694 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.875677 kubelet[1960]: E0128 02:01:17.866171 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:17.971153 kubelet[1960]: E0128 02:01:17.969316 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.098955 kubelet[1960]: E0128 02:01:18.094483 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.197574 kubelet[1960]: E0128 02:01:18.197025 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.301810 kubelet[1960]: E0128 02:01:18.298126 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.401417 kubelet[1960]: E0128 02:01:18.401244 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 
02:01:18.458810 kubelet[1960]: E0128 02:01:18.455191 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:18.505567 kubelet[1960]: E0128 02:01:18.504383 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.614958 kubelet[1960]: E0128 02:01:18.605096 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.715085 kubelet[1960]: E0128 02:01:18.714040 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.815925 kubelet[1960]: E0128 02:01:18.815467 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:18.922085 kubelet[1960]: E0128 02:01:18.921127 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.023543 kubelet[1960]: E0128 02:01:19.022582 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.128994 kubelet[1960]: E0128 02:01:19.126981 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.228272 kubelet[1960]: E0128 02:01:19.228091 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.332991 kubelet[1960]: E0128 02:01:19.330728 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.435421 kubelet[1960]: E0128 02:01:19.433987 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.462656 kubelet[1960]: E0128 02:01:19.462290 1960 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:19.537488 kubelet[1960]: E0128 02:01:19.537103 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.640184 kubelet[1960]: E0128 02:01:19.639619 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.745495 kubelet[1960]: E0128 02:01:19.743108 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.849530 kubelet[1960]: E0128 02:01:19.846220 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:19.949224 kubelet[1960]: E0128 02:01:19.948115 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.054823 kubelet[1960]: E0128 02:01:20.054508 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.157358 kubelet[1960]: E0128 02:01:20.156246 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.261518 kubelet[1960]: E0128 02:01:20.258640 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.340477 kubelet[1960]: E0128 02:01:20.340110 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:20.362300 kubelet[1960]: E0128 02:01:20.361190 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.464393 kubelet[1960]: E0128 02:01:20.463176 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 28 02:01:20.464393 kubelet[1960]: E0128 02:01:20.463276 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.567721 kubelet[1960]: E0128 02:01:20.563521 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.666595 kubelet[1960]: E0128 02:01:20.665473 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.767236 kubelet[1960]: E0128 02:01:20.766185 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.814343 kubelet[1960]: E0128 02:01:20.813405 1960 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" not found" Jan 28 02:01:20.871146 kubelet[1960]: E0128 02:01:20.868431 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:20.972653 kubelet[1960]: E0128 02:01:20.971748 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.076930 kubelet[1960]: E0128 02:01:21.073527 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.180722 kubelet[1960]: E0128 02:01:21.179397 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.244750 kubelet[1960]: E0128 02:01:21.244220 1960 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.114\": node \"10.0.0.114\" not found" Jan 28 02:01:21.288269 kubelet[1960]: I0128 02:01:21.285216 1960 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 
28 02:01:21.312512 containerd[1601]: time="2026-01-28T02:01:21.310169541Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 02:01:21.313250 kubelet[1960]: I0128 02:01:21.311259 1960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 28 02:01:21.393610 kubelet[1960]: E0128 02:01:21.392335 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.465607 kubelet[1960]: E0128 02:01:21.464739 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:21.492944 kubelet[1960]: E0128 02:01:21.492593 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.594052 kubelet[1960]: E0128 02:01:21.592828 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.694452 kubelet[1960]: E0128 02:01:21.694041 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.802331 kubelet[1960]: E0128 02:01:21.797760 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:21.899053 kubelet[1960]: E0128 02:01:21.898988 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:22.002523 kubelet[1960]: E0128 02:01:22.001400 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:22.105564 kubelet[1960]: E0128 02:01:22.102981 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:22.206524 kubelet[1960]: E0128 02:01:22.206082 1960 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 28 02:01:22.438121 kubelet[1960]: I0128 02:01:22.431734 1960 apiserver.go:52] "Watching apiserver" Jan 28 02:01:22.475015 kubelet[1960]: E0128 02:01:22.471555 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:22.490970 kubelet[1960]: I0128 02:01:22.490707 1960 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 02:01:22.570731 kubelet[1960]: I0128 02:01:22.567651 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5bbe2723-98e7-4997-96e2-5de940303a07-kube-proxy\") pod \"kube-proxy-rbscm\" (UID: \"5bbe2723-98e7-4997-96e2-5de940303a07\") " pod="kube-system/kube-proxy-rbscm" Jan 28 02:01:22.570731 kubelet[1960]: I0128 02:01:22.567693 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bbe2723-98e7-4997-96e2-5de940303a07-xtables-lock\") pod \"kube-proxy-rbscm\" (UID: \"5bbe2723-98e7-4997-96e2-5de940303a07\") " pod="kube-system/kube-proxy-rbscm" Jan 28 02:01:22.570731 kubelet[1960]: I0128 02:01:22.567728 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bbe2723-98e7-4997-96e2-5de940303a07-lib-modules\") pod \"kube-proxy-rbscm\" (UID: \"5bbe2723-98e7-4997-96e2-5de940303a07\") " pod="kube-system/kube-proxy-rbscm" Jan 28 02:01:22.570731 kubelet[1960]: I0128 02:01:22.567756 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27wh\" (UniqueName: \"kubernetes.io/projected/5bbe2723-98e7-4997-96e2-5de940303a07-kube-api-access-g27wh\") pod \"kube-proxy-rbscm\" 
(UID: \"5bbe2723-98e7-4997-96e2-5de940303a07\") " pod="kube-system/kube-proxy-rbscm" Jan 28 02:01:22.575199 systemd[1]: Created slice kubepods-besteffort-pod5bbe2723_98e7_4997_96e2_5de940303a07.slice - libcontainer container kubepods-besteffort-pod5bbe2723_98e7_4997_96e2_5de940303a07.slice. Jan 28 02:01:22.927560 kubelet[1960]: E0128 02:01:22.926160 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:22.931507 containerd[1601]: time="2026-01-28T02:01:22.930342380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rbscm,Uid:5bbe2723-98e7-4997-96e2-5de940303a07,Namespace:kube-system,Attempt:0,}" Jan 28 02:01:23.475156 kubelet[1960]: E0128 02:01:23.475119 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:23.721106 kubelet[1960]: I0128 02:01:23.720776 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-cni-net-dir\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721106 kubelet[1960]: I0128 02:01:23.721080 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-flexvol-driver-host\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721273 kubelet[1960]: I0128 02:01:23.721247 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-var-lib-calico\") pod 
\"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721320 kubelet[1960]: I0128 02:01:23.721278 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-var-run-calico\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721320 kubelet[1960]: I0128 02:01:23.721306 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-cni-bin-dir\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721494 kubelet[1960]: I0128 02:01:23.721462 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-cni-log-dir\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721535 kubelet[1960]: I0128 02:01:23.721514 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-lib-modules\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721574 kubelet[1960]: I0128 02:01:23.721544 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-node-certs\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " 
pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721626 kubelet[1960]: I0128 02:01:23.721606 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-xtables-lock\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721660 kubelet[1960]: I0128 02:01:23.721635 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-policysync\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.721689 kubelet[1960]: I0128 02:01:23.721660 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-tigera-ca-bundle\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.722972 kubelet[1960]: I0128 02:01:23.721734 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gqhg\" (UniqueName: \"kubernetes.io/projected/a0a03517-0efb-4ce9-afe7-96f25a1e9b09-kube-api-access-8gqhg\") pod \"calico-node-kt9ff\" (UID: \"a0a03517-0efb-4ce9-afe7-96f25a1e9b09\") " pod="calico-system/calico-node-kt9ff" Jan 28 02:01:23.722998 systemd[1]: Created slice kubepods-besteffort-poda0a03517_0efb_4ce9_afe7_96f25a1e9b09.slice - libcontainer container kubepods-besteffort-poda0a03517_0efb_4ce9_afe7_96f25a1e9b09.slice. 
Jan 28 02:01:23.841678 kubelet[1960]: E0128 02:01:23.841490 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:23.841678 kubelet[1960]: W0128 02:01:23.841523 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:23.841678 kubelet[1960]: E0128 02:01:23.841559 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:23.881133 kubelet[1960]: E0128 02:01:23.877835 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:23.881133 kubelet[1960]: W0128 02:01:23.880562 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:23.881133 kubelet[1960]: E0128 02:01:23.880688 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.005153 kubelet[1960]: E0128 02:01:24.005097 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:24.029207 kubelet[1960]: E0128 02:01:24.029133 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.029207 kubelet[1960]: W0128 02:01:24.029162 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.029207 kubelet[1960]: E0128 02:01:24.029188 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.031666 kubelet[1960]: E0128 02:01:24.031063 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.031666 kubelet[1960]: W0128 02:01:24.031131 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.031666 kubelet[1960]: E0128 02:01:24.031148 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.032654 kubelet[1960]: E0128 02:01:24.032632 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.032761 kubelet[1960]: W0128 02:01:24.032744 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.033612 kubelet[1960]: E0128 02:01:24.032830 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.035029 kubelet[1960]: E0128 02:01:24.033975 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.035029 kubelet[1960]: W0128 02:01:24.034057 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.035029 kubelet[1960]: E0128 02:01:24.034083 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.035029 kubelet[1960]: E0128 02:01:24.034958 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.035029 kubelet[1960]: W0128 02:01:24.034974 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.035029 kubelet[1960]: E0128 02:01:24.034990 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.035233 kubelet[1960]: E0128 02:01:24.035211 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.035233 kubelet[1960]: W0128 02:01:24.035225 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.035296 kubelet[1960]: E0128 02:01:24.035239 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.037572 kubelet[1960]: E0128 02:01:24.037546 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.038210 kubelet[1960]: W0128 02:01:24.037678 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.038210 kubelet[1960]: E0128 02:01:24.037752 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.038210 kubelet[1960]: E0128 02:01:24.038213 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.038335 kubelet[1960]: W0128 02:01:24.038224 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.038335 kubelet[1960]: E0128 02:01:24.038237 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.041068 kubelet[1960]: E0128 02:01:24.040773 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.041068 kubelet[1960]: W0128 02:01:24.040938 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.041068 kubelet[1960]: E0128 02:01:24.040957 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.041242 kubelet[1960]: E0128 02:01:24.041193 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.041242 kubelet[1960]: W0128 02:01:24.041203 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.041242 kubelet[1960]: E0128 02:01:24.041214 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.045004 kubelet[1960]: E0128 02:01:24.044982 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.047487 kubelet[1960]: W0128 02:01:24.045046 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.047487 kubelet[1960]: E0128 02:01:24.045060 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.047487 kubelet[1960]: E0128 02:01:24.046068 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.047487 kubelet[1960]: W0128 02:01:24.046080 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.047487 kubelet[1960]: E0128 02:01:24.046092 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.047487 kubelet[1960]: E0128 02:01:24.046330 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.047487 kubelet[1960]: W0128 02:01:24.046339 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.047487 kubelet[1960]: E0128 02:01:24.046349 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.047487 kubelet[1960]: E0128 02:01:24.046633 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.047487 kubelet[1960]: W0128 02:01:24.046643 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.047781 kubelet[1960]: E0128 02:01:24.046653 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.052083 kubelet[1960]: E0128 02:01:24.051996 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.052083 kubelet[1960]: W0128 02:01:24.052222 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.052083 kubelet[1960]: E0128 02:01:24.052237 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.065147 kubelet[1960]: E0128 02:01:24.058389 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.065147 kubelet[1960]: W0128 02:01:24.058474 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.065147 kubelet[1960]: E0128 02:01:24.058489 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:01:24.065147 kubelet[1960]: E0128 02:01:24.061275 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.065147 kubelet[1960]: W0128 02:01:24.061287 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.065147 kubelet[1960]: E0128 02:01:24.061741 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:01:24.065147 kubelet[1960]: I0128 02:01:24.061772 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15b582de-4a9d-49bf-b8af-da9b7c0dc36f-kubelet-dir\") pod \"csi-node-driver-krgpk\" (UID: \"15b582de-4a9d-49bf-b8af-da9b7c0dc36f\") " pod="calico-system/csi-node-driver-krgpk" Jan 28 02:01:24.065147 kubelet[1960]: E0128 02:01:24.062007 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:01:24.065147 kubelet[1960]: W0128 02:01:24.062017 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:01:24.068627 kubelet[1960]: E0128 02:01:24.062029 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 28 02:01:24.068627 kubelet[1960]: E0128 02:01:24.062278 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.068627 kubelet[1960]: W0128 02:01:24.062287 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.068627 kubelet[1960]: E0128 02:01:24.062298 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.068627 kubelet[1960]: E0128 02:01:24.065778 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.068627 kubelet[1960]: W0128 02:01:24.065789 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.068985 kubelet[1960]: E0128 02:01:24.068717 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.071910 kubelet[1960]: E0128 02:01:24.070478 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.071910 kubelet[1960]: W0128 02:01:24.070547 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.071910 kubelet[1960]: E0128 02:01:24.070714 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.071910 kubelet[1960]: I0128 02:01:24.070741 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/15b582de-4a9d-49bf-b8af-da9b7c0dc36f-socket-dir\") pod \"csi-node-driver-krgpk\" (UID: \"15b582de-4a9d-49bf-b8af-da9b7c0dc36f\") " pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:01:24.074672 kubelet[1960]: E0128 02:01:24.073932 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.074672 kubelet[1960]: W0128 02:01:24.073999 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.074672 kubelet[1960]: E0128 02:01:24.074116 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.078149 kubelet[1960]: E0128 02:01:24.077373 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.078149 kubelet[1960]: W0128 02:01:24.077504 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.081379 kubelet[1960]: E0128 02:01:24.081064 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.087146 kubelet[1960]: E0128 02:01:24.086182 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.087146 kubelet[1960]: W0128 02:01:24.086256 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.087146 kubelet[1960]: E0128 02:01:24.086676 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.087146 kubelet[1960]: I0128 02:01:24.086734 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/15b582de-4a9d-49bf-b8af-da9b7c0dc36f-registration-dir\") pod \"csi-node-driver-krgpk\" (UID: \"15b582de-4a9d-49bf-b8af-da9b7c0dc36f\") " pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:01:24.089661 kubelet[1960]: E0128 02:01:24.088021 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.089661 kubelet[1960]: W0128 02:01:24.088035 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.089661 kubelet[1960]: E0128 02:01:24.088286 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.091307 kubelet[1960]: E0128 02:01:24.090666 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.091307 kubelet[1960]: W0128 02:01:24.090755 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.091307 kubelet[1960]: E0128 02:01:24.090770 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.092587 kubelet[1960]: E0128 02:01:24.092353 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.092587 kubelet[1960]: W0128 02:01:24.092365 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.092587 kubelet[1960]: E0128 02:01:24.092376 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.095225 kubelet[1960]: I0128 02:01:24.094766 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/15b582de-4a9d-49bf-b8af-da9b7c0dc36f-varrun\") pod \"csi-node-driver-krgpk\" (UID: \"15b582de-4a9d-49bf-b8af-da9b7c0dc36f\") " pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:01:24.099065 kubelet[1960]: E0128 02:01:24.098811 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.099065 kubelet[1960]: W0128 02:01:24.099017 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.099065 kubelet[1960]: E0128 02:01:24.099038 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.101400 kubelet[1960]: E0128 02:01:24.099505 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.101400 kubelet[1960]: W0128 02:01:24.099580 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.101400 kubelet[1960]: E0128 02:01:24.099596 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.106045 kubelet[1960]: E0128 02:01:24.105023 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.106045 kubelet[1960]: W0128 02:01:24.105090 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.106045 kubelet[1960]: E0128 02:01:24.105569 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.113315 kubelet[1960]: E0128 02:01:24.109152 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.113315 kubelet[1960]: W0128 02:01:24.110833 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.113315 kubelet[1960]: E0128 02:01:24.111222 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.113315 kubelet[1960]: E0128 02:01:24.112206 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.113315 kubelet[1960]: W0128 02:01:24.112218 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.113315 kubelet[1960]: E0128 02:01:24.112231 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.206318 kubelet[1960]: E0128 02:01:24.206211 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.206318 kubelet[1960]: W0128 02:01:24.206240 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.206318 kubelet[1960]: E0128 02:01:24.206263 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.216256 kubelet[1960]: E0128 02:01:24.215267 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.216256 kubelet[1960]: W0128 02:01:24.215355 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.216256 kubelet[1960]: E0128 02:01:24.215521 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.225653 kubelet[1960]: E0128 02:01:24.224589 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.225653 kubelet[1960]: W0128 02:01:24.224610 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.225653 kubelet[1960]: E0128 02:01:24.225093 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.241026 kubelet[1960]: E0128 02:01:24.237234 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.241026 kubelet[1960]: W0128 02:01:24.237326 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.241026 kubelet[1960]: E0128 02:01:24.237417 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.241026 kubelet[1960]: E0128 02:01:24.238801 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.241026 kubelet[1960]: W0128 02:01:24.238811 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.241669 kubelet[1960]: E0128 02:01:24.241329 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.252344 kubelet[1960]: E0128 02:01:24.252204 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.252496 kubelet[1960]: W0128 02:01:24.252349 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.253129 kubelet[1960]: E0128 02:01:24.252654 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.260526 kubelet[1960]: E0128 02:01:24.259119 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.260526 kubelet[1960]: W0128 02:01:24.259188 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.260526 kubelet[1960]: E0128 02:01:24.259669 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.263327 kubelet[1960]: E0128 02:01:24.261186 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.263327 kubelet[1960]: W0128 02:01:24.261248 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.263327 kubelet[1960]: E0128 02:01:24.261661 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.269074 kubelet[1960]: E0128 02:01:24.268177 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.269074 kubelet[1960]: W0128 02:01:24.268251 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.269074 kubelet[1960]: E0128 02:01:24.268503 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.271501 kubelet[1960]: E0128 02:01:24.270146 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.271501 kubelet[1960]: W0128 02:01:24.270161 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.271796 kubelet[1960]: E0128 02:01:24.271762 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.273497 kubelet[1960]: E0128 02:01:24.272769 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.273497 kubelet[1960]: W0128 02:01:24.272783 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.273497 kubelet[1960]: E0128 02:01:24.272999 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.273497 kubelet[1960]: I0128 02:01:24.273033 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqrj\" (UniqueName: \"kubernetes.io/projected/15b582de-4a9d-49bf-b8af-da9b7c0dc36f-kube-api-access-dxqrj\") pod \"csi-node-driver-krgpk\" (UID: \"15b582de-4a9d-49bf-b8af-da9b7c0dc36f\") " pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:01:24.275498 kubelet[1960]: E0128 02:01:24.274801 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.275498 kubelet[1960]: W0128 02:01:24.274813 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.275498 kubelet[1960]: E0128 02:01:24.275121 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.279914 kubelet[1960]: E0128 02:01:24.279600 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.279914 kubelet[1960]: W0128 02:01:24.279672 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.280598 kubelet[1960]: E0128 02:01:24.280318 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.281361 kubelet[1960]: E0128 02:01:24.281280 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.281361 kubelet[1960]: W0128 02:01:24.281298 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.281569 kubelet[1960]: E0128 02:01:24.281550 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.285310 kubelet[1960]: E0128 02:01:24.285239 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.285310 kubelet[1960]: W0128 02:01:24.285259 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.285523 kubelet[1960]: E0128 02:01:24.285423 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.287621 kubelet[1960]: E0128 02:01:24.287365 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.287621 kubelet[1960]: W0128 02:01:24.287379 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.287994 kubelet[1960]: E0128 02:01:24.287975 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.295249 kubelet[1960]: E0128 02:01:24.289341 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.295249 kubelet[1960]: W0128 02:01:24.293103 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.308665 kubelet[1960]: E0128 02:01:24.308640 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.309060 kubelet[1960]: W0128 02:01:24.308764 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.309283 kubelet[1960]: E0128 02:01:24.309266 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.309360 kubelet[1960]: W0128 02:01:24.309345 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.309730 kubelet[1960]: E0128 02:01:24.309715 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.309826 kubelet[1960]: W0128 02:01:24.309811 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.310233 kubelet[1960]: E0128 02:01:24.310218 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.310306 kubelet[1960]: W0128 02:01:24.310292 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.310378 kubelet[1960]: E0128 02:01:24.310358 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.310651 kubelet[1960]: E0128 02:01:24.310632 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.314211 kubelet[1960]: E0128 02:01:24.312577 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.314211 kubelet[1960]: E0128 02:01:24.312690 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.314211 kubelet[1960]: E0128 02:01:24.312727 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.323298 kubelet[1960]: E0128 02:01:24.323272 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.323736 kubelet[1960]: W0128 02:01:24.323384 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.323736 kubelet[1960]: E0128 02:01:24.323535 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.328398 kubelet[1960]: E0128 02:01:24.328147 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.328398 kubelet[1960]: W0128 02:01:24.328164 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.328398 kubelet[1960]: E0128 02:01:24.328180 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.331044 kubelet[1960]: E0128 02:01:24.329663 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.331727 kubelet[1960]: W0128 02:01:24.331522 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.331727 kubelet[1960]: E0128 02:01:24.331608 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.353810 kubelet[1960]: E0128 02:01:24.350122 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:01:24.354076 containerd[1601]: time="2026-01-28T02:01:24.352090992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kt9ff,Uid:a0a03517-0efb-4ce9-afe7-96f25a1e9b09,Namespace:calico-system,Attempt:0,}"
Jan 28 02:01:24.379305 kubelet[1960]: E0128 02:01:24.377589 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.379305 kubelet[1960]: W0128 02:01:24.377615 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.379305 kubelet[1960]: E0128 02:01:24.377640 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.392647 kubelet[1960]: E0128 02:01:24.386664 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.392647 kubelet[1960]: W0128 02:01:24.386691 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.392647 kubelet[1960]: E0128 02:01:24.386715 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.392647 kubelet[1960]: E0128 02:01:24.387113 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.392647 kubelet[1960]: W0128 02:01:24.387125 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.392647 kubelet[1960]: E0128 02:01:24.387141 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.392647 kubelet[1960]: E0128 02:01:24.391293 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.392647 kubelet[1960]: W0128 02:01:24.391307 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.392647 kubelet[1960]: E0128 02:01:24.391324 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.396132 kubelet[1960]: E0128 02:01:24.395296 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.396132 kubelet[1960]: W0128 02:01:24.395370 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.396132 kubelet[1960]: E0128 02:01:24.395388 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.478010 kubelet[1960]: E0128 02:01:24.475801 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:01:24.572319 kubelet[1960]: E0128 02:01:24.571421 1960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:01:24.572319 kubelet[1960]: W0128 02:01:24.571704 1960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:01:24.572319 kubelet[1960]: E0128 02:01:24.571828 1960 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 02:01:24.990640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928636491.mount: Deactivated successfully.
Jan 28 02:01:25.065070 containerd[1601]: time="2026-01-28T02:01:25.062029408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 02:01:25.075385 containerd[1601]: time="2026-01-28T02:01:25.074222057Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 28 02:01:25.081259 containerd[1601]: time="2026-01-28T02:01:25.080669385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 02:01:25.095600 containerd[1601]: time="2026-01-28T02:01:25.095399193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 28 02:01:25.099622 containerd[1601]: time="2026-01-28T02:01:25.098755547Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 02:01:25.110956 containerd[1601]: time="2026-01-28T02:01:25.110427276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 28 02:01:25.114646 containerd[1601]: time="2026-01-28T02:01:25.112829668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.047428404s"
Jan 28 02:01:25.116680 containerd[1601]: time="2026-01-28T02:01:25.115730658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 726.924945ms"
Jan 28 02:01:25.287318 containerd[1601]: time="2026-01-28T02:01:25.286498499Z" level=info msg="connecting to shim 7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5" address="unix:///run/containerd/s/a207b2290cacd6be4ade278567ccf17e2980d2afc34fbeb68cfeb5596dd10f31" namespace=k8s.io protocol=ttrpc version=3
Jan 28 02:01:25.303694 containerd[1601]: time="2026-01-28T02:01:25.302521107Z" level=info msg="connecting to shim f3c36eb2a4778c78f61a15b13ba91ef20f7e65dd19c4c16186c0aba4a656bce8" address="unix:///run/containerd/s/ad972c14b38099c7d6e485f01921268b51bcf7cc9b505262c4aab5efb1719502" namespace=k8s.io protocol=ttrpc version=3
Jan 28 02:01:25.438214 systemd[1]: Started cri-containerd-7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5.scope - libcontainer container 7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5.
Jan 28 02:01:25.484125 kubelet[1960]: E0128 02:01:25.484067 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:01:25.489643 systemd[1]: Started cri-containerd-f3c36eb2a4778c78f61a15b13ba91ef20f7e65dd19c4c16186c0aba4a656bce8.scope - libcontainer container f3c36eb2a4778c78f61a15b13ba91ef20f7e65dd19c4c16186c0aba4a656bce8.
Jan 28 02:01:25.509000 audit: BPF prog-id=83 op=LOAD Jan 28 02:01:25.534087 kernel: audit: type=1334 audit(1769565685.509:310): prog-id=83 op=LOAD Jan 28 02:01:25.512000 audit: BPF prog-id=84 op=LOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.627683 kernel: audit: type=1334 audit(1769565685.512:311): prog-id=84 op=LOAD Jan 28 02:01:25.627776 kernel: audit: type=1300 audit(1769565685.512:311): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.671666 kernel: audit: type=1327 audit(1769565685.512:311): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.671784 kernel: audit: type=1334 audit(1769565685.512:312): prog-id=84 op=UNLOAD Jan 28 02:01:25.512000 audit: BPF prog-id=84 op=UNLOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.802082 kernel: audit: type=1300 audit(1769565685.512:312): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.802155 kernel: audit: type=1327 audit(1769565685.512:312): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.512000 audit: BPF prog-id=85 op=LOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.919508 kubelet[1960]: E0128 02:01:25.918209 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:25.925716 kernel: audit: type=1334 audit(1769565685.512:313): prog-id=85 op=LOAD Jan 28 02:01:25.925776 kernel: audit: type=1300 
audit(1769565685.512:313): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.955983 kernel: audit: type=1327 audit(1769565685.512:313): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.512000 audit: BPF prog-id=86 op=LOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.512000 audit: BPF prog-id=86 op=UNLOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.512000 audit: BPF prog-id=85 op=UNLOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.512000 audit: BPF prog-id=87 op=LOAD Jan 28 02:01:25.512000 audit[2131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2112 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386163336332343236393039613634656632313734653430376330 Jan 28 02:01:25.561000 audit: BPF prog-id=88 op=LOAD Jan 28 02:01:25.569000 audit: BPF prog-id=89 op=LOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2109 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.569000 audit: BPF prog-id=89 op=UNLOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2109 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.569000 audit: BPF prog-id=90 op=LOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2109 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.569000 audit: BPF prog-id=91 op=LOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2109 pid=2138 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.569000 audit: BPF prog-id=91 op=UNLOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2109 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.569000 audit: BPF prog-id=90 op=UNLOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2109 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.569000 audit: BPF prog-id=92 op=LOAD Jan 28 02:01:25.569000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2109 
pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:25.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633633336656232613437373863373866363161313562313362613931 Jan 28 02:01:25.998074 containerd[1601]: time="2026-01-28T02:01:25.994013025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kt9ff,Uid:a0a03517-0efb-4ce9-afe7-96f25a1e9b09,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\"" Jan 28 02:01:25.998554 kubelet[1960]: E0128 02:01:25.996237 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:26.012060 containerd[1601]: time="2026-01-28T02:01:26.011214519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rbscm,Uid:5bbe2723-98e7-4997-96e2-5de940303a07,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3c36eb2a4778c78f61a15b13ba91ef20f7e65dd19c4c16186c0aba4a656bce8\"" Jan 28 02:01:26.012060 containerd[1601]: time="2026-01-28T02:01:26.011435086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 02:01:26.026027 kubelet[1960]: E0128 02:01:26.025395 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:26.484651 kubelet[1960]: E0128 02:01:26.484613 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:27.363154 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3715385439.mount: Deactivated successfully. Jan 28 02:01:27.488373 kubelet[1960]: E0128 02:01:27.486439 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:27.916424 kubelet[1960]: E0128 02:01:27.916261 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:28.224218 containerd[1601]: time="2026-01-28T02:01:28.223692301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:01:28.230576 containerd[1601]: time="2026-01-28T02:01:28.230545262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 02:01:28.234650 containerd[1601]: time="2026-01-28T02:01:28.233417266Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:01:28.249821 containerd[1601]: time="2026-01-28T02:01:28.249765567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:01:28.255462 containerd[1601]: time="2026-01-28T02:01:28.253611696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.242027985s" Jan 28 02:01:28.255462 containerd[1601]: time="2026-01-28T02:01:28.255188031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 02:01:28.273644 containerd[1601]: time="2026-01-28T02:01:28.273327922Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 02:01:28.277606 containerd[1601]: time="2026-01-28T02:01:28.276642859Z" level=info msg="CreateContainer within sandbox \"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 02:01:28.409538 containerd[1601]: time="2026-01-28T02:01:28.407446839Z" level=info msg="Container 5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:01:28.493798 kubelet[1960]: E0128 02:01:28.490461 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:28.580076 containerd[1601]: time="2026-01-28T02:01:28.577520768Z" level=info msg="CreateContainer within sandbox \"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d\"" Jan 28 02:01:28.595681 containerd[1601]: time="2026-01-28T02:01:28.595635059Z" level=info msg="StartContainer for \"5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d\"" Jan 28 02:01:28.626520 containerd[1601]: time="2026-01-28T02:01:28.623635342Z" level=info msg="connecting to shim 5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d" 
address="unix:///run/containerd/s/a207b2290cacd6be4ade278567ccf17e2980d2afc34fbeb68cfeb5596dd10f31" protocol=ttrpc version=3 Jan 28 02:01:28.821078 systemd[1]: Started cri-containerd-5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d.scope - libcontainer container 5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d. Jan 28 02:01:29.029000 audit: BPF prog-id=93 op=LOAD Jan 28 02:01:29.029000 audit[2194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2112 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:29.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563356263303433343939383461343962643265366361633466653530 Jan 28 02:01:29.029000 audit: BPF prog-id=94 op=LOAD Jan 28 02:01:29.029000 audit[2194]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2112 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:29.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563356263303433343939383461343962643265366361633466653530 Jan 28 02:01:29.029000 audit: BPF prog-id=94 op=UNLOAD Jan 28 02:01:29.029000 audit[2194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:29.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563356263303433343939383461343962643265366361633466653530 Jan 28 02:01:29.029000 audit: BPF prog-id=93 op=UNLOAD Jan 28 02:01:29.029000 audit[2194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:29.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563356263303433343939383461343962643265366361633466653530 Jan 28 02:01:29.029000 audit: BPF prog-id=95 op=LOAD Jan 28 02:01:29.029000 audit[2194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2112 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:29.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563356263303433343939383461343962643265366361633466653530 Jan 28 02:01:29.124686 containerd[1601]: time="2026-01-28T02:01:29.124468252Z" level=info msg="StartContainer for \"5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d\" returns successfully" Jan 28 02:01:29.198598 systemd[1]: 
cri-containerd-5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d.scope: Deactivated successfully. Jan 28 02:01:29.220508 containerd[1601]: time="2026-01-28T02:01:29.219193662Z" level=info msg="received container exit event container_id:\"5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d\" id:\"5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d\" pid:2206 exited_at:{seconds:1769565689 nanos:212752408}" Jan 28 02:01:29.220000 audit: BPF prog-id=95 op=UNLOAD Jan 28 02:01:29.355451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c5bc04349984a49bd2e6cac4fe505ba94598b437327164716db785d45bade5d-rootfs.mount: Deactivated successfully. Jan 28 02:01:29.492459 kubelet[1960]: E0128 02:01:29.492401 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:29.526019 kubelet[1960]: E0128 02:01:29.525991 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:29.925469 kubelet[1960]: E0128 02:01:29.924410 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:30.494659 kubelet[1960]: E0128 02:01:30.494006 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:31.495416 kubelet[1960]: E0128 02:01:31.495348 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:31.919029 kubelet[1960]: E0128 02:01:31.916757 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:32.291434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686741346.mount: Deactivated successfully. Jan 28 02:01:32.495985 kubelet[1960]: E0128 02:01:32.495796 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:33.497413 kubelet[1960]: E0128 02:01:33.497153 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:33.919116 kubelet[1960]: E0128 02:01:33.916982 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:34.508779 kubelet[1960]: E0128 02:01:34.505532 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:35.510036 kubelet[1960]: E0128 02:01:35.509968 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:35.916151 kubelet[1960]: E0128 02:01:35.915679 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:36.517511 kubelet[1960]: E0128 02:01:36.517455 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:36.722371 containerd[1601]: time="2026-01-28T02:01:36.722010967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:01:36.727994 containerd[1601]: time="2026-01-28T02:01:36.727057422Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 28 02:01:36.732528 containerd[1601]: time="2026-01-28T02:01:36.732455300Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:01:36.742012 containerd[1601]: time="2026-01-28T02:01:36.741548606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:01:36.747488 containerd[1601]: time="2026-01-28T02:01:36.744681277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 8.471308768s" Jan 28 02:01:36.747488 containerd[1601]: time="2026-01-28T02:01:36.744779831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 02:01:36.748542 containerd[1601]: time="2026-01-28T02:01:36.748190137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 02:01:36.754387 containerd[1601]: time="2026-01-28T02:01:36.753279351Z" level=info msg="CreateContainer within sandbox 
\"f3c36eb2a4778c78f61a15b13ba91ef20f7e65dd19c4c16186c0aba4a656bce8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 02:01:36.810591 containerd[1601]: time="2026-01-28T02:01:36.810209098Z" level=info msg="Container 7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:01:36.837236 containerd[1601]: time="2026-01-28T02:01:36.836729593Z" level=info msg="CreateContainer within sandbox \"f3c36eb2a4778c78f61a15b13ba91ef20f7e65dd19c4c16186c0aba4a656bce8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53\"" Jan 28 02:01:36.841968 containerd[1601]: time="2026-01-28T02:01:36.840439709Z" level=info msg="StartContainer for \"7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53\"" Jan 28 02:01:36.841968 containerd[1601]: time="2026-01-28T02:01:36.841792956Z" level=info msg="connecting to shim 7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53" address="unix:///run/containerd/s/ad972c14b38099c7d6e485f01921268b51bcf7cc9b505262c4aab5efb1719502" protocol=ttrpc version=3 Jan 28 02:01:36.982420 systemd[1]: Started cri-containerd-7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53.scope - libcontainer container 7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53. 
Jan 28 02:01:37.171000 audit: BPF prog-id=96 op=LOAD Jan 28 02:01:37.196044 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 28 02:01:37.207370 kernel: audit: type=1334 audit(1769565697.171:332): prog-id=96 op=LOAD Jan 28 02:01:37.207464 kernel: audit: type=1300 audit(1769565697.171:332): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.171000 audit[2247]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.270462 kernel: audit: type=1327 audit(1769565697.171:332): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.171000 audit: BPF prog-id=97 op=LOAD Jan 28 02:01:37.396107 kernel: audit: type=1334 audit(1769565697.171:333): prog-id=97 op=LOAD Jan 28 02:01:37.171000 audit[2247]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.504093 kernel: audit: type=1300 audit(1769565697.171:333): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.504228 kernel: audit: type=1327 audit(1769565697.171:333): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.519482 kubelet[1960]: E0128 02:01:37.519434 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:37.177000 audit: BPF prog-id=97 op=UNLOAD Jan 28 02:01:37.177000 audit[2247]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.734661 kernel: audit: type=1334 audit(1769565697.177:334): prog-id=97 op=UNLOAD Jan 28 02:01:37.734780 kernel: audit: type=1300 audit(1769565697.177:334): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.761255 containerd[1601]: time="2026-01-28T02:01:37.742603057Z" level=info msg="StartContainer for \"7165d8add18304e0c7aa4d89a87f29fe1f2da5beaba099ad442df507fa0eea53\" returns successfully" Jan 28 02:01:37.177000 audit: BPF prog-id=96 op=UNLOAD Jan 28 02:01:37.815359 kernel: audit: type=1327 audit(1769565697.177:334): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.815498 kernel: audit: type=1334 audit(1769565697.177:335): prog-id=96 op=UNLOAD Jan 28 02:01:37.177000 audit[2247]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.177000 audit: BPF prog-id=98 op=LOAD Jan 28 02:01:37.177000 audit[2247]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2109 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:37.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363564386164643138333034653063376161346438396138376632 Jan 28 02:01:37.916657 kubelet[1960]: E0128 02:01:37.915406 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:38.527050 kubelet[1960]: E0128 02:01:38.526956 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:38.673657 kubelet[1960]: E0128 02:01:38.673610 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:39.074000 audit[2312]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.074000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0b4c1980 a2=0 a3=7ffe0b4c196c items=0 ppid=2260 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 02:01:39.082000 audit[2311]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 02:01:39.082000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdeb849430 a2=0 a3=7ffdeb84941c items=0 ppid=2260 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 02:01:39.090000 audit[2314]: NETFILTER_CFG table=nat:16 family=10 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:39.090000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe68648a40 a2=0 a3=7ffe68648a2c items=0 ppid=2260 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 02:01:39.096000 audit[2313]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.096000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8ed27980 a2=0 a3=7ffd8ed2796c items=0 ppid=2260 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 02:01:39.113000 audit[2316]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_chain pid=2316 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.113000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6a8df7f0 a2=0 a3=7ffe6a8df7dc items=0 ppid=2260 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 02:01:39.116000 audit[2315]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:39.116000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedaedaf30 a2=0 a3=7ffedaedaf1c items=0 ppid=2260 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 02:01:39.193000 audit[2317]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.193000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0f2c19e0 a2=0 a3=7ffd0f2c19cc items=0 ppid=2260 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 02:01:39.226000 audit[2319]: NETFILTER_CFG 
table=filter:21 family=2 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.226000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc25523320 a2=0 a3=7ffc2552330c items=0 ppid=2260 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 02:01:39.363000 audit[2322]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.363000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffd01b4460 a2=0 a3=7fffd01b444c items=0 ppid=2260 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 02:01:39.384000 audit[2323]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.384000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6f9a1470 a2=0 a3=7ffc6f9a145c items=0 ppid=2260 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 02:01:39.412000 audit[2325]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.412000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2f925e30 a2=0 a3=7ffc2f925e1c items=0 ppid=2260 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 02:01:39.423000 audit[2326]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.423000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd9b75960 a2=0 a3=7ffcd9b7594c items=0 ppid=2260 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 02:01:39.508000 audit[2328]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.508000 audit[2328]: SYSCALL arch=c000003e syscall=46 
success=yes exit=744 a0=3 a1=7fff21c8af70 a2=0 a3=7fff21c8af5c items=0 ppid=2260 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 02:01:39.529244 kubelet[1960]: E0128 02:01:39.529187 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:39.601000 audit[2333]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.601000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcf2fcd40 a2=0 a3=7fffcf2fcd2c items=0 ppid=2260 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.601000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 02:01:39.628000 audit[2336]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.628000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff13627af0 a2=0 a3=7fff13627adc items=0 ppid=2260 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 02:01:39.681000 audit[2338]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.681000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb21b7e60 a2=0 a3=7ffeb21b7e4c items=0 ppid=2260 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 02:01:39.690712 kubelet[1960]: E0128 02:01:39.690685 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:39.694000 audit[2339]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.694000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe59bbe260 a2=0 a3=7ffe59bbe24c items=0 ppid=2260 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 
02:01:39.730000 audit[2341]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.730000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff08401510 a2=0 a3=7fff084014fc items=0 ppid=2260 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 02:01:39.800000 audit[2344]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.800000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe17ce3e80 a2=0 a3=7ffe17ce3e6c items=0 ppid=2260 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 02:01:39.853000 audit[2347]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.853000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7cd13a70 a2=0 a3=7ffc7cd13a5c items=0 ppid=2260 pid=2347 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 02:01:39.872000 audit[2348]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.872000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff339fd310 a2=0 a3=7fff339fd2fc items=0 ppid=2260 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 02:01:39.919787 kubelet[1960]: E0128 02:01:39.918306 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:39.923000 audit[2350]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:39.923000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff6b5c7260 a2=0 a3=7fff6b5c724c items=0 ppid=2260 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:39.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 02:01:40.119000 audit[2354]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:40.119000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcf54af880 a2=0 a3=7ffcf54af86c items=0 ppid=2260 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 02:01:40.156000 audit[2355]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:40.156000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7fa91620 a2=0 a3=7ffd7fa9160c items=0 ppid=2260 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 02:01:40.169000 audit[2357]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 02:01:40.169000 audit[2357]: SYSCALL arch=c000003e syscall=46 
success=yes exit=612 a0=3 a1=7ffea685b120 a2=0 a3=7ffea685b10c items=0 ppid=2260 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 02:01:40.304000 audit[2363]: NETFILTER_CFG table=filter:39 family=2 entries=11 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:01:40.304000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0a0afcf0 a2=0 a3=7ffe0a0afcdc items=0 ppid=2260 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:01:40.343685 kubelet[1960]: E0128 02:01:40.340075 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:40.474000 audit[2363]: NETFILTER_CFG table=nat:40 family=2 entries=21 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:01:40.474000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=9084 a0=3 a1=7ffe0a0afcf0 a2=0 a3=7ffe0a0afcdc items=0 ppid=2260 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.474000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:01:40.495000 audit[2370]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.495000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe27342dd0 a2=0 a3=7ffe27342dbc items=0 ppid=2260 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 02:01:40.539939 kubelet[1960]: E0128 02:01:40.535523 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:40.560000 audit[2372]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.560000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffea3e502b0 a2=0 a3=7ffea3e5029c items=0 ppid=2260 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 02:01:40.660000 audit[2375]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Jan 28 02:01:40.660000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc9baab630 a2=0 a3=7ffc9baab61c items=0 ppid=2260 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 02:01:40.680000 audit[2376]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.680000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe69137f80 a2=0 a3=7ffe69137f6c items=0 ppid=2260 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 02:01:40.721000 audit[2378]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.721000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff2d8162b0 a2=0 a3=7fff2d81629c items=0 ppid=2260 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.721000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 02:01:40.725000 audit[2379]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.725000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc675651d0 a2=0 a3=7ffc675651bc items=0 ppid=2260 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 02:01:40.788000 audit[2381]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.788000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd99e19fe0 a2=0 a3=7ffd99e19fcc items=0 ppid=2260 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 02:01:40.844000 audit[2384]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.844000 audit[2384]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7fff085f6140 a2=0 a3=7fff085f612c items=0 ppid=2260 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 02:01:40.880000 audit[2385]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.880000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8aaa5920 a2=0 a3=7ffe8aaa590c items=0 ppid=2260 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 02:01:40.912000 audit[2387]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.912000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe2e276e0 a2=0 a3=7fffe2e276cc items=0 ppid=2260 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.912000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 02:01:40.957000 audit[2389]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:40.957000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc746941a0 a2=0 a3=7ffc7469418c items=0 ppid=2260 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:40.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 02:01:41.002000 audit[2393]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.002000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0d10f130 a2=0 a3=7ffc0d10f11c items=0 ppid=2260 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 02:01:41.040000 audit[2396]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.040000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffd85220dc0 a2=0 a3=7ffd85220dac items=0 ppid=2260 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 02:01:41.127000 audit[2399]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.127000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef39b7c30 a2=0 a3=7ffef39b7c1c items=0 ppid=2260 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 02:01:41.145000 audit[2400]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.145000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6b0208c0 a2=0 a3=7ffd6b0208ac items=0 ppid=2260 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.145000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 02:01:41.186000 audit[2402]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.186000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffec1f72d80 a2=0 a3=7ffec1f72d6c items=0 ppid=2260 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 02:01:41.219000 audit[2405]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.219000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc14f12830 a2=0 a3=7ffc14f1281c items=0 ppid=2260 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 02:01:41.239000 audit[2406]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.239000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3ff4bcc0 a2=0 a3=7fff3ff4bcac items=0 ppid=2260 
pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 02:01:41.273000 audit[2408]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.273000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff061b55d0 a2=0 a3=7fff061b55bc items=0 ppid=2260 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 02:01:41.320000 audit[2409]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.320000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffddd01fa0 a2=0 a3=7fffddd01f8c items=0 ppid=2260 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 02:01:41.347000 audit[2411]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 02:01:41.347000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8529cec0 a2=0 a3=7fff8529ceac items=0 ppid=2260 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 02:01:41.413000 audit[2414]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 02:01:41.413000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc24de5660 a2=0 a3=7ffc24de564c items=0 ppid=2260 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 02:01:41.476000 audit[2416]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 02:01:41.476000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe67216140 a2=0 a3=7ffe6721612c items=0 ppid=2260 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.476000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:01:41.476000 audit[2416]: NETFILTER_CFG table=nat:64 
family=10 entries=7 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 02:01:41.476000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe67216140 a2=0 a3=7ffe6721612c items=0 ppid=2260 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:01:41.476000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:01:41.542240 kubelet[1960]: E0128 02:01:41.539402 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:41.921072 kubelet[1960]: E0128 02:01:41.917634 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:42.579048 kubelet[1960]: E0128 02:01:42.577833 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:43.578795 kubelet[1960]: E0128 02:01:43.578601 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:43.917284 kubelet[1960]: E0128 02:01:43.916699 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:44.585433 kubelet[1960]: E0128 02:01:44.580161 1960 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:45.586320 kubelet[1960]: E0128 02:01:45.581116 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:45.925998 kubelet[1960]: E0128 02:01:45.917287 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:46.586548 kubelet[1960]: E0128 02:01:46.586323 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:47.591580 kubelet[1960]: E0128 02:01:47.591277 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:47.918289 kubelet[1960]: E0128 02:01:47.917611 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:48.599543 kubelet[1960]: E0128 02:01:48.597421 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:49.599662 kubelet[1960]: E0128 02:01:49.599511 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:49.931061 kubelet[1960]: E0128 02:01:49.928469 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:50.602614 kubelet[1960]: E0128 02:01:50.602417 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:51.604088 kubelet[1960]: E0128 02:01:51.603735 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:51.892280 kubelet[1960]: E0128 02:01:51.891290 1960 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T02:01:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T02:01:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T02:01:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T02:01:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\\\",\\\"registry.k8s.io/kube-proxy:v1.32.11\\\"],\\\"sizeBytes\\\":31160918},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\\\"],\\\"sizeBytes\\\":5941314},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"10.0.0.114\": Patch 
\"https://10.0.0.105:6443/api/v1/nodes/10.0.0.114/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 02:01:51.920351 kubelet[1960]: E0128 02:01:51.916592 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:52.606234 kubelet[1960]: E0128 02:01:52.606086 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:53.607427 kubelet[1960]: E0128 02:01:53.607285 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:53.918084 kubelet[1960]: E0128 02:01:53.916737 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:54.614524 kubelet[1960]: E0128 02:01:54.610325 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:55.612609 kubelet[1960]: E0128 02:01:55.611693 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:55.934729 kubelet[1960]: E0128 02:01:55.930577 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" 
podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:56.638094 kubelet[1960]: E0128 02:01:56.629077 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:57.646210 kubelet[1960]: E0128 02:01:57.646042 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:57.916505 kubelet[1960]: E0128 02:01:57.915411 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:58.661293 kubelet[1960]: E0128 02:01:58.661100 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:01:58.931271 kubelet[1960]: E0128 02:01:58.921395 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:01:59.679309 kubelet[1960]: E0128 02:01:59.674489 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:00.340719 kubelet[1960]: E0128 02:02:00.340466 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:00.680383 kubelet[1960]: E0128 02:02:00.678395 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:00.979683 kubelet[1960]: E0128 02:02:00.973341 1960 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:02:01.683044 kubelet[1960]: E0128 02:02:01.680834 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:01.746245 containerd[1601]: time="2026-01-28T02:02:01.745824482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:02:01.762411 containerd[1601]: time="2026-01-28T02:02:01.762348438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445002" Jan 28 02:02:01.767366 containerd[1601]: time="2026-01-28T02:02:01.766695536Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:02:01.786574 containerd[1601]: time="2026-01-28T02:02:01.786418301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:02:01.795802 containerd[1601]: time="2026-01-28T02:02:01.790309979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 25.042025337s" Jan 28 02:02:01.795802 containerd[1601]: time="2026-01-28T02:02:01.790350552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image 
reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 02:02:01.877946 containerd[1601]: time="2026-01-28T02:02:01.868003302Z" level=info msg="CreateContainer within sandbox \"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 02:02:02.027680 containerd[1601]: time="2026-01-28T02:02:02.024959503Z" level=info msg="Container 678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:02:02.087222 containerd[1601]: time="2026-01-28T02:02:02.087085253Z" level=info msg="CreateContainer within sandbox \"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d\"" Jan 28 02:02:02.099251 containerd[1601]: time="2026-01-28T02:02:02.099022891Z" level=info msg="StartContainer for \"678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d\"" Jan 28 02:02:02.129237 containerd[1601]: time="2026-01-28T02:02:02.125414660Z" level=info msg="connecting to shim 678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d" address="unix:///run/containerd/s/a207b2290cacd6be4ade278567ccf17e2980d2afc34fbeb68cfeb5596dd10f31" protocol=ttrpc version=3 Jan 28 02:02:02.611058 systemd[1]: Started cri-containerd-678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d.scope - libcontainer container 678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d. 
Jan 28 02:02:02.683783 kubelet[1960]: E0128 02:02:02.683711 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:03.001577 kubelet[1960]: E0128 02:02:03.001500 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:02:03.298000 audit: BPF prog-id=99 op=LOAD Jan 28 02:02:03.318099 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 28 02:02:03.319624 kernel: audit: type=1334 audit(1769565723.298:388): prog-id=99 op=LOAD Jan 28 02:02:03.298000 audit[2422]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.432204 kernel: audit: type=1300 audit(1769565723.298:388): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.432999 kernel: audit: type=1327 audit(1769565723.298:388): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: BPF prog-id=100 op=LOAD Jan 28 02:02:03.531812 kernel: audit: type=1334 audit(1769565723.298:389): prog-id=100 op=LOAD Jan 28 02:02:03.298000 audit[2422]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.638517 kernel: audit: type=1300 audit(1769565723.298:389): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.639287 kernel: audit: type=1327 audit(1769565723.298:389): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: BPF prog-id=100 op=UNLOAD Jan 28 02:02:03.690013 kernel: audit: type=1334 audit(1769565723.298:390): prog-id=100 op=UNLOAD Jan 28 02:02:03.696977 kubelet[1960]: E0128 02:02:03.689348 1960 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:03.298000 audit[2422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.823610 kernel: audit: type=1300 audit(1769565723.298:390): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.828151 kernel: audit: type=1327 audit(1769565723.298:390): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: BPF prog-id=99 op=UNLOAD Jan 28 02:02:03.933774 kernel: audit: type=1334 audit(1769565723.298:391): prog-id=99 op=UNLOAD Jan 28 02:02:03.298000 audit[2422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.298000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:03.298000 audit: BPF prog-id=101 op=LOAD Jan 28 02:02:03.298000 audit[2422]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2112 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:02:03.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637386265623130393264336337313937323666656161303839306534 Jan 28 02:02:04.202601 containerd[1601]: time="2026-01-28T02:02:04.197587361Z" level=info msg="StartContainer for \"678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d\" returns successfully" Jan 28 02:02:04.708498 kubelet[1960]: E0128 02:02:04.703210 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:04.918649 kubelet[1960]: E0128 02:02:04.917125 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:02:05.397596 kubelet[1960]: E0128 02:02:05.397101 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:02:05.539062 kubelet[1960]: I0128 
02:02:05.538549 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rbscm" podStartSLOduration=43.840467535 podStartE2EDuration="54.538530995s" podCreationTimestamp="2026-01-28 02:01:11 +0000 UTC" firstStartedPulling="2026-01-28 02:01:26.049208927 +0000 UTC m=+47.060920633" lastFinishedPulling="2026-01-28 02:01:36.747272387 +0000 UTC m=+57.758984093" observedRunningTime="2026-01-28 02:01:38.791994434 +0000 UTC m=+59.803706170" watchObservedRunningTime="2026-01-28 02:02:05.538530995 +0000 UTC m=+86.550242701"
Jan 28 02:02:05.704497 kubelet[1960]: E0128 02:02:05.703697 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:06.422691 kubelet[1960]: E0128 02:02:06.422509 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:06.715834 kubelet[1960]: E0128 02:02:06.715648 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:06.935801 kubelet[1960]: E0128 02:02:06.932272 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:07.720172 kubelet[1960]: E0128 02:02:07.719980 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:08.722167 kubelet[1960]: E0128 02:02:08.721428 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:08.921254 kubelet[1960]: E0128 02:02:08.920518 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:09.727226 kubelet[1960]: E0128 02:02:09.724372 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:10.729985 kubelet[1960]: E0128 02:02:10.724754 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:10.919316 kubelet[1960]: E0128 02:02:10.916126 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:11.733613 kubelet[1960]: E0128 02:02:11.733173 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:12.735379 kubelet[1960]: E0128 02:02:12.733527 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:12.926335 kubelet[1960]: E0128 02:02:12.925465 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:13.739652 kubelet[1960]: E0128 02:02:13.738559 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:14.739670 kubelet[1960]: E0128 02:02:14.739378 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:14.967146 kubelet[1960]: E0128 02:02:14.966660 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:15.744406 kubelet[1960]: E0128 02:02:15.743321 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:16.747039 kubelet[1960]: E0128 02:02:16.746604 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:16.917673 kubelet[1960]: E0128 02:02:16.917546 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:17.490050 systemd[1]: cri-containerd-678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d.scope: Deactivated successfully.
Jan 28 02:02:17.490637 systemd[1]: cri-containerd-678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d.scope: Consumed 8.813s CPU time, 197.3M memory peak, 171.3M written to disk.
Jan 28 02:02:17.519000 audit: BPF prog-id=101 op=UNLOAD
Jan 28 02:02:17.537615 kubelet[1960]: I0128 02:02:17.525302 1960 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 28 02:02:17.544165 containerd[1601]: time="2026-01-28T02:02:17.538336821Z" level=info msg="received container exit event container_id:\"678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d\" id:\"678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d\" pid:2435 exited_at:{seconds:1769565737 nanos:537106255}"
Jan 28 02:02:17.556200 kernel: kauditd_printk_skb: 5 callbacks suppressed
Jan 28 02:02:17.566758 kernel: audit: type=1334 audit(1769565737.519:393): prog-id=101 op=UNLOAD
Jan 28 02:02:17.749206 kubelet[1960]: E0128 02:02:17.747752 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:17.871144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-678beb1092d3c719726feaa0890e4d8176d7b2cec915e6d0a9bbfc3f4b2f7b3d-rootfs.mount: Deactivated successfully.
Jan 28 02:02:18.638508 kubelet[1960]: E0128 02:02:18.637505 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:18.639166 containerd[1601]: time="2026-01-28T02:02:18.639124669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 28 02:02:18.750319 kubelet[1960]: E0128 02:02:18.749122 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:18.996780 systemd[1]: Created slice kubepods-besteffort-pod15b582de_4a9d_49bf_b8af_da9b7c0dc36f.slice - libcontainer container kubepods-besteffort-pod15b582de_4a9d_49bf_b8af_da9b7c0dc36f.slice.
Jan 28 02:02:19.039423 containerd[1601]: time="2026-01-28T02:02:19.038482665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,}"
Jan 28 02:02:19.765828 kubelet[1960]: E0128 02:02:19.762106 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:19.882038 containerd[1601]: time="2026-01-28T02:02:19.880817001Z" level=error msg="Failed to destroy network for sandbox \"c8090380d49994cc2ba6c0322b7a7eb4a4f32bc9ff6de6cc0d45ca9c26fb853c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:19.890131 systemd[1]: run-netns-cni\x2de6ed9093\x2dd259\x2de254\x2d5887\x2d791dceb31015.mount: Deactivated successfully.
Jan 28 02:02:19.949020 containerd[1601]: time="2026-01-28T02:02:19.942960792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8090380d49994cc2ba6c0322b7a7eb4a4f32bc9ff6de6cc0d45ca9c26fb853c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:19.949943 kubelet[1960]: E0128 02:02:19.948629 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8090380d49994cc2ba6c0322b7a7eb4a4f32bc9ff6de6cc0d45ca9c26fb853c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:19.949943 kubelet[1960]: E0128 02:02:19.948745 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8090380d49994cc2ba6c0322b7a7eb4a4f32bc9ff6de6cc0d45ca9c26fb853c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:02:19.949943 kubelet[1960]: E0128 02:02:19.948787 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8090380d49994cc2ba6c0322b7a7eb4a4f32bc9ff6de6cc0d45ca9c26fb853c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:02:19.950120 kubelet[1960]: E0128 02:02:19.948989 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8090380d49994cc2ba6c0322b7a7eb4a4f32bc9ff6de6cc0d45ca9c26fb853c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:20.359047 kubelet[1960]: E0128 02:02:20.343046 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:20.766043 kubelet[1960]: E0128 02:02:20.764372 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:21.766519 kubelet[1960]: E0128 02:02:21.766084 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:22.785368 kubelet[1960]: E0128 02:02:22.771233 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:23.777558 kubelet[1960]: E0128 02:02:23.776362 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:24.787691 kubelet[1960]: E0128 02:02:24.787523 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:25.799667 kubelet[1960]: E0128 02:02:25.799114 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:26.801079 kubelet[1960]: E0128 02:02:26.800412 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:27.806638 kubelet[1960]: E0128 02:02:27.806074 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:28.819619 kubelet[1960]: E0128 02:02:28.818312 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:29.820813 kubelet[1960]: E0128 02:02:29.820650 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:30.823263 kubelet[1960]: E0128 02:02:30.823085 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:31.827506 kubelet[1960]: E0128 02:02:31.827262 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:32.828344 kubelet[1960]: E0128 02:02:32.827738 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:33.829961 kubelet[1960]: E0128 02:02:33.829680 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:33.929040 containerd[1601]: time="2026-01-28T02:02:33.924975684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,}"
Jan 28 02:02:34.842650 kubelet[1960]: E0128 02:02:34.842390 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:35.840450 containerd[1601]: time="2026-01-28T02:02:35.840168297Z" level=error msg="Failed to destroy network for sandbox \"4ae99bcf9502e8bdfc81d709332d78de5c6f174664d71887afba52d3d9cee479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:35.847276 systemd[1]: run-netns-cni\x2d088dd497\x2dcdfb\x2d3691\x2dd34d\x2d30ca60b49ed2.mount: Deactivated successfully.
Jan 28 02:02:35.856026 kubelet[1960]: E0128 02:02:35.849621 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:35.885003 containerd[1601]: time="2026-01-28T02:02:35.884692655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae99bcf9502e8bdfc81d709332d78de5c6f174664d71887afba52d3d9cee479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:35.885266 kubelet[1960]: E0128 02:02:35.885103 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae99bcf9502e8bdfc81d709332d78de5c6f174664d71887afba52d3d9cee479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:35.885266 kubelet[1960]: E0128 02:02:35.885174 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae99bcf9502e8bdfc81d709332d78de5c6f174664d71887afba52d3d9cee479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:02:35.885266 kubelet[1960]: E0128 02:02:35.885203 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ae99bcf9502e8bdfc81d709332d78de5c6f174664d71887afba52d3d9cee479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk"
Jan 28 02:02:35.885419 kubelet[1960]: E0128 02:02:35.885257 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ae99bcf9502e8bdfc81d709332d78de5c6f174664d71887afba52d3d9cee479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:02:36.861463 kubelet[1960]: E0128 02:02:36.856508 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:37.861735 kubelet[1960]: E0128 02:02:37.860663 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:38.868434 kubelet[1960]: E0128 02:02:38.866100 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:39.884944 kubelet[1960]: E0128 02:02:39.871461 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:40.343931 kubelet[1960]: E0128 02:02:40.340639 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:40.873183 kubelet[1960]: E0128 02:02:40.873142 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:41.140828 systemd[1]: Created slice kubepods-besteffort-pod9a7cf4fa_e7b1_45e3_92d2_5754fd7693cc.slice - libcontainer container kubepods-besteffort-pod9a7cf4fa_e7b1_45e3_92d2_5754fd7693cc.slice.
Jan 28 02:02:41.158283 kubelet[1960]: I0128 02:02:41.157189 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc-tigera-ca-bundle\") pod \"calico-kube-controllers-78fc6b544-rfcfq\" (UID: \"9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc\") " pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq"
Jan 28 02:02:41.158283 kubelet[1960]: I0128 02:02:41.157237 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fnl\" (UniqueName: \"kubernetes.io/projected/9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc-kube-api-access-s4fnl\") pod \"calico-kube-controllers-78fc6b544-rfcfq\" (UID: \"9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc\") " pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq"
Jan 28 02:02:41.244508 systemd[1]: Created slice kubepods-burstable-pod1f7a7a51_f1ca_4889_bd7c_61ed908ad5f6.slice - libcontainer container kubepods-burstable-pod1f7a7a51_f1ca_4889_bd7c_61ed908ad5f6.slice.
Jan 28 02:02:41.362938 kubelet[1960]: I0128 02:02:41.360785 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6-config-volume\") pod \"coredns-668d6bf9bc-t45sz\" (UID: \"1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6\") " pod="kube-system/coredns-668d6bf9bc-t45sz"
Jan 28 02:02:41.362938 kubelet[1960]: I0128 02:02:41.360834 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-ca-bundle\") pod \"whisker-5d657c99cd-hnvlr\" (UID: \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\") " pod="calico-system/whisker-5d657c99cd-hnvlr"
Jan 28 02:02:41.362938 kubelet[1960]: I0128 02:02:41.361218 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67521aee-68dc-4703-af3e-6a8c6df60cd8-calico-apiserver-certs\") pod \"calico-apiserver-6656f8f9d9-spnd9\" (UID: \"67521aee-68dc-4703-af3e-6a8c6df60cd8\") " pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9"
Jan 28 02:02:41.362938 kubelet[1960]: I0128 02:02:41.361250 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4b6fba0-f381-4858-a71c-ba2619256e7e-goldmane-ca-bundle\") pod \"goldmane-666569f655-5zdgq\" (UID: \"f4b6fba0-f381-4858-a71c-ba2619256e7e\") " pod="calico-system/goldmane-666569f655-5zdgq"
Jan 28 02:02:41.362938 kubelet[1960]: I0128 02:02:41.361271 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f4b6fba0-f381-4858-a71c-ba2619256e7e-goldmane-key-pair\") pod \"goldmane-666569f655-5zdgq\" (UID: \"f4b6fba0-f381-4858-a71c-ba2619256e7e\") " pod="calico-system/goldmane-666569f655-5zdgq"
Jan 28 02:02:41.363213 kubelet[1960]: I0128 02:02:41.361291 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slzz8\" (UniqueName: \"kubernetes.io/projected/1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6-kube-api-access-slzz8\") pod \"coredns-668d6bf9bc-t45sz\" (UID: \"1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6\") " pod="kube-system/coredns-668d6bf9bc-t45sz"
Jan 28 02:02:41.363213 kubelet[1960]: I0128 02:02:41.361320 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a2efbc6-3a74-40a5-b192-41e159a7237c-calico-apiserver-certs\") pod \"calico-apiserver-6656f8f9d9-6mpkc\" (UID: \"5a2efbc6-3a74-40a5-b192-41e159a7237c\") " pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc"
Jan 28 02:02:41.363213 kubelet[1960]: I0128 02:02:41.361345 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xql9\" (UniqueName: \"kubernetes.io/projected/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-kube-api-access-8xql9\") pod \"whisker-5d657c99cd-hnvlr\" (UID: \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\") " pod="calico-system/whisker-5d657c99cd-hnvlr"
Jan 28 02:02:41.363213 kubelet[1960]: I0128 02:02:41.361371 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b6fba0-f381-4858-a71c-ba2619256e7e-config\") pod \"goldmane-666569f655-5zdgq\" (UID: \"f4b6fba0-f381-4858-a71c-ba2619256e7e\") " pod="calico-system/goldmane-666569f655-5zdgq"
Jan 28 02:02:41.363213 kubelet[1960]: I0128 02:02:41.361391 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3eaa438b-c98e-4a63-b138-6192c658da00-config-volume\") pod \"coredns-668d6bf9bc-zwxm9\" (UID: \"3eaa438b-c98e-4a63-b138-6192c658da00\") " pod="kube-system/coredns-668d6bf9bc-zwxm9"
Jan 28 02:02:41.363389 kubelet[1960]: I0128 02:02:41.361418 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jld9p\" (UniqueName: \"kubernetes.io/projected/f4b6fba0-f381-4858-a71c-ba2619256e7e-kube-api-access-jld9p\") pod \"goldmane-666569f655-5zdgq\" (UID: \"f4b6fba0-f381-4858-a71c-ba2619256e7e\") " pod="calico-system/goldmane-666569f655-5zdgq"
Jan 28 02:02:41.363389 kubelet[1960]: I0128 02:02:41.361444 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nln7r\" (UniqueName: \"kubernetes.io/projected/5a2efbc6-3a74-40a5-b192-41e159a7237c-kube-api-access-nln7r\") pod \"calico-apiserver-6656f8f9d9-6mpkc\" (UID: \"5a2efbc6-3a74-40a5-b192-41e159a7237c\") " pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc"
Jan 28 02:02:41.363389 kubelet[1960]: I0128 02:02:41.361464 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d497h\" (UniqueName: \"kubernetes.io/projected/67521aee-68dc-4703-af3e-6a8c6df60cd8-kube-api-access-d497h\") pod \"calico-apiserver-6656f8f9d9-spnd9\" (UID: \"67521aee-68dc-4703-af3e-6a8c6df60cd8\") " pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9"
Jan 28 02:02:41.363389 kubelet[1960]: I0128 02:02:41.361487 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-backend-key-pair\") pod \"whisker-5d657c99cd-hnvlr\" (UID: \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\") " pod="calico-system/whisker-5d657c99cd-hnvlr"
Jan 28 02:02:41.363389 kubelet[1960]: I0128 02:02:41.361514 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjz7x\" (UniqueName: \"kubernetes.io/projected/3eaa438b-c98e-4a63-b138-6192c658da00-kube-api-access-mjz7x\") pod \"coredns-668d6bf9bc-zwxm9\" (UID: \"3eaa438b-c98e-4a63-b138-6192c658da00\") " pod="kube-system/coredns-668d6bf9bc-zwxm9"
Jan 28 02:02:41.404544 systemd[1]: Created slice kubepods-besteffort-pod5a2efbc6_3a74_40a5_b192_41e159a7237c.slice - libcontainer container kubepods-besteffort-pod5a2efbc6_3a74_40a5_b192_41e159a7237c.slice.
Jan 28 02:02:41.463502 kubelet[1960]: I0128 02:02:41.463232 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl82c\" (UniqueName: \"kubernetes.io/projected/36471742-e8b3-41d5-8572-474eef077778-kube-api-access-fl82c\") pod \"nginx-deployment-7fcdb87857-c7z7j\" (UID: \"36471742-e8b3-41d5-8572-474eef077778\") " pod="default/nginx-deployment-7fcdb87857-c7z7j"
Jan 28 02:02:41.489106 containerd[1601]: time="2026-01-28T02:02:41.485607816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,}"
Jan 28 02:02:42.582832 kubelet[1960]: E0128 02:02:42.570479 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:42.766660 systemd[1]: Created slice kubepods-besteffort-pod07489a4c_3aa2_4f2e_8d83_fc6d8034089f.slice - libcontainer container kubepods-besteffort-pod07489a4c_3aa2_4f2e_8d83_fc6d8034089f.slice.
Jan 28 02:02:42.931664 systemd[1]: Created slice kubepods-besteffort-pod67521aee_68dc_4703_af3e_6a8c6df60cd8.slice - libcontainer container kubepods-besteffort-pod67521aee_68dc_4703_af3e_6a8c6df60cd8.slice.
Jan 28 02:02:42.979629 containerd[1601]: time="2026-01-28T02:02:42.979272601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 02:02:42.995390 systemd[1]: Created slice kubepods-besteffort-podf4b6fba0_f381_4858_a71c_ba2619256e7e.slice - libcontainer container kubepods-besteffort-podf4b6fba0_f381_4858_a71c_ba2619256e7e.slice.
Jan 28 02:02:43.028164 containerd[1601]: time="2026-01-28T02:02:43.027978380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,}"
Jan 28 02:02:43.144617 systemd[1]: Created slice kubepods-burstable-pod3eaa438b_c98e_4a63_b138_6192c658da00.slice - libcontainer container kubepods-burstable-pod3eaa438b_c98e_4a63_b138_6192c658da00.slice.
Jan 28 02:02:43.177185 kubelet[1960]: E0128 02:02:43.176538 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:43.212332 containerd[1601]: time="2026-01-28T02:02:43.212063615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d657c99cd-hnvlr,Uid:07489a4c-3aa2-4f2e-8d83-fc6d8034089f,Namespace:calico-system,Attempt:0,}"
Jan 28 02:02:43.222650 containerd[1601]: time="2026-01-28T02:02:43.221156504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,}"
Jan 28 02:02:43.236290 kubelet[1960]: E0128 02:02:43.234446 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:43.264010 containerd[1601]: time="2026-01-28T02:02:43.261110562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,}"
Jan 28 02:02:43.264010 containerd[1601]: time="2026-01-28T02:02:43.261529117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 02:02:43.347307 systemd[1]: Created slice kubepods-besteffort-pod36471742_e8b3_41d5_8572_474eef077778.slice - libcontainer container kubepods-besteffort-pod36471742_e8b3_41d5_8572_474eef077778.slice.
Jan 28 02:02:43.477663 containerd[1601]: time="2026-01-28T02:02:43.475144711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,}"
Jan 28 02:02:43.574249 kubelet[1960]: E0128 02:02:43.572767 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:44.575408 kubelet[1960]: E0128 02:02:44.575256 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:45.577976 kubelet[1960]: E0128 02:02:45.577639 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:46.640223 kubelet[1960]: E0128 02:02:46.580605 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:02:47.016544 containerd[1601]: time="2026-01-28T02:02:47.001293285Z" level=error msg="Failed to destroy network for sandbox \"e84724adbc06a296eabba0a63523319af22297bc0480a989e6e035a9db99e7f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.004473 systemd[1]: run-netns-cni\x2dbaec66ec\x2db985\x2d8a94\x2d140c\x2d074eabfa8894.mount: Deactivated successfully.
Jan 28 02:02:47.049361 containerd[1601]: time="2026-01-28T02:02:47.034612971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84724adbc06a296eabba0a63523319af22297bc0480a989e6e035a9db99e7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.049361 containerd[1601]: time="2026-01-28T02:02:47.049032073Z" level=error msg="Failed to destroy network for sandbox \"bb1da92df180ed15b9ecaf044d4228287b0c93c388f48710006b13614acfb355\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.049611 kubelet[1960]: E0128 02:02:47.035090 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84724adbc06a296eabba0a63523319af22297bc0480a989e6e035a9db99e7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.049611 kubelet[1960]: E0128 02:02:47.035166 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84724adbc06a296eabba0a63523319af22297bc0480a989e6e035a9db99e7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq"
Jan 28 02:02:47.049611 kubelet[1960]: E0128 02:02:47.035194 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84724adbc06a296eabba0a63523319af22297bc0480a989e6e035a9db99e7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq"
Jan 28 02:02:47.051757 kubelet[1960]: E0128 02:02:47.035244 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e84724adbc06a296eabba0a63523319af22297bc0480a989e6e035a9db99e7f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc"
Jan 28 02:02:47.075541 containerd[1601]: time="2026-01-28T02:02:47.071208679Z" level=error msg="Failed to destroy network for sandbox \"21463cd14babb61864533d7388a0c90320cedf7971decd947749cff259def82e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.075541 containerd[1601]: time="2026-01-28T02:02:47.072756032Z" level=error msg="Failed to destroy network for sandbox \"17521ca276693b982720c5d07f93f94c874ceeffd5a108076b938d568019987c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.074612 systemd[1]: run-netns-cni\x2dac907b60\x2d476f\x2d1121\x2d0f6e\x2d2022c03438ea.mount: Deactivated successfully.
Jan 28 02:02:47.119490 systemd[1]: run-netns-cni\x2d3640ccad\x2d8827\x2d47cc\x2dc0fe\x2d3d5fca0d2033.mount: Deactivated successfully.
Jan 28 02:02:47.121115 systemd[1]: run-netns-cni\x2dc97ee6d4\x2d0e9a\x2de6f2\x2dc327\x2d08c91b7ea6d3.mount: Deactivated successfully.
Jan 28 02:02:47.230361 containerd[1601]: time="2026-01-28T02:02:47.227807329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1da92df180ed15b9ecaf044d4228287b0c93c388f48710006b13614acfb355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.242346 kubelet[1960]: E0128 02:02:47.238462 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1da92df180ed15b9ecaf044d4228287b0c93c388f48710006b13614acfb355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.242346 kubelet[1960]: E0128 02:02:47.238552 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1da92df180ed15b9ecaf044d4228287b0c93c388f48710006b13614acfb355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c7z7j"
Jan 28 02:02:47.242346 kubelet[1960]: E0128 02:02:47.238600 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb1da92df180ed15b9ecaf044d4228287b0c93c388f48710006b13614acfb355\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c7z7j"
Jan 28 02:02:47.242568 kubelet[1960]: E0128 02:02:47.238665 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-c7z7j_default(36471742-e8b3-41d5-8572-474eef077778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-c7z7j_default(36471742-e8b3-41d5-8572-474eef077778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb1da92df180ed15b9ecaf044d4228287b0c93c388f48710006b13614acfb355\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-c7z7j" podUID="36471742-e8b3-41d5-8572-474eef077778"
Jan 28 02:02:47.298210 containerd[1601]: time="2026-01-28T02:02:47.295327809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21463cd14babb61864533d7388a0c90320cedf7971decd947749cff259def82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.302257 containerd[1601]: time="2026-01-28T02:02:47.302216682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17521ca276693b982720c5d07f93f94c874ceeffd5a108076b938d568019987c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.310246 kubelet[1960]: E0128 02:02:47.310081 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17521ca276693b982720c5d07f93f94c874ceeffd5a108076b938d568019987c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:02:47.310454 kubelet[1960]: E0128 02:02:47.310236 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17521ca276693b982720c5d07f93f94c874ceeffd5a108076b938d568019987c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5zdgq"
Jan 28 02:02:47.310454 kubelet[1960]: E0128 02:02:47.310344 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17521ca276693b982720c5d07f93f94c874ceeffd5a108076b938d568019987c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:02:47.310454 kubelet[1960]: E0128 02:02:47.310416 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17521ca276693b982720c5d07f93f94c874ceeffd5a108076b938d568019987c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:02:47.310792 kubelet[1960]: E0128 02:02:47.310472 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21463cd14babb61864533d7388a0c90320cedf7971decd947749cff259def82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.310792 kubelet[1960]: E0128 02:02:47.310499 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21463cd14babb61864533d7388a0c90320cedf7971decd947749cff259def82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" Jan 28 02:02:47.310792 kubelet[1960]: E0128 02:02:47.310520 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21463cd14babb61864533d7388a0c90320cedf7971decd947749cff259def82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" Jan 28 02:02:47.311128 kubelet[1960]: E0128 02:02:47.310551 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21463cd14babb61864533d7388a0c90320cedf7971decd947749cff259def82e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8" Jan 28 02:02:47.364661 containerd[1601]: time="2026-01-28T02:02:47.364498482Z" level=error msg="Failed to destroy network for sandbox \"c4646fdba69db411305fcef1fd0e3354d49115caa7ebadc515e64e9e05f9b655\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.403632 systemd[1]: run-netns-cni\x2dce1e3897\x2d3b39\x2d031e\x2d7e5b\x2deea4ad74799a.mount: Deactivated successfully. 
Jan 28 02:02:47.523594 containerd[1601]: time="2026-01-28T02:02:47.520488557Z" level=error msg="Failed to destroy network for sandbox \"e755d5a01eff8e36df6c42db4449685dadc79c2e79a0b1789188562746563993\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.528630 systemd[1]: run-netns-cni\x2d7800f93b\x2daadb\x2d608b\x2dcca8\x2d97a958ef795b.mount: Deactivated successfully. Jan 28 02:02:47.561326 containerd[1601]: time="2026-01-28T02:02:47.546646628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4646fdba69db411305fcef1fd0e3354d49115caa7ebadc515e64e9e05f9b655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.576493 kubelet[1960]: E0128 02:02:47.567602 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4646fdba69db411305fcef1fd0e3354d49115caa7ebadc515e64e9e05f9b655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.576493 kubelet[1960]: E0128 02:02:47.573008 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4646fdba69db411305fcef1fd0e3354d49115caa7ebadc515e64e9e05f9b655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:02:47.576493 kubelet[1960]: E0128 02:02:47.573486 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4646fdba69db411305fcef1fd0e3354d49115caa7ebadc515e64e9e05f9b655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:02:47.576696 kubelet[1960]: E0128 02:02:47.573564 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4646fdba69db411305fcef1fd0e3354d49115caa7ebadc515e64e9e05f9b655\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwxm9" podUID="3eaa438b-c98e-4a63-b138-6192c658da00" Jan 28 02:02:47.584277 kubelet[1960]: E0128 02:02:47.583621 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:47.595760 containerd[1601]: time="2026-01-28T02:02:47.594801282Z" level=error msg="Failed to destroy network for sandbox \"d58d25256cbbeb0c09ea018bb16f84074fc76fb431d0bb096303b51e3bc019e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.613310 containerd[1601]: time="2026-01-28T02:02:47.613182431Z" level=error 
msg="Failed to destroy network for sandbox \"a6490b3fa885696e7967c2910da2fcc5b6c99783544c6d2f9e0a371b629d4aa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.619211 containerd[1601]: time="2026-01-28T02:02:47.616695817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e755d5a01eff8e36df6c42db4449685dadc79c2e79a0b1789188562746563993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.624192 kubelet[1960]: E0128 02:02:47.619831 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e755d5a01eff8e36df6c42db4449685dadc79c2e79a0b1789188562746563993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.624192 kubelet[1960]: E0128 02:02:47.622526 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e755d5a01eff8e36df6c42db4449685dadc79c2e79a0b1789188562746563993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:02:47.624192 kubelet[1960]: E0128 02:02:47.622556 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"e755d5a01eff8e36df6c42db4449685dadc79c2e79a0b1789188562746563993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:02:47.624522 kubelet[1960]: E0128 02:02:47.622621 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e755d5a01eff8e36df6c42db4449685dadc79c2e79a0b1789188562746563993\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:02:47.642656 containerd[1601]: time="2026-01-28T02:02:47.637522663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58d25256cbbeb0c09ea018bb16f84074fc76fb431d0bb096303b51e3bc019e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.642656 containerd[1601]: time="2026-01-28T02:02:47.642376865Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5d657c99cd-hnvlr,Uid:07489a4c-3aa2-4f2e-8d83-fc6d8034089f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6490b3fa885696e7967c2910da2fcc5b6c99783544c6d2f9e0a371b629d4aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.643328 kubelet[1960]: E0128 02:02:47.638383 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58d25256cbbeb0c09ea018bb16f84074fc76fb431d0bb096303b51e3bc019e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.643328 kubelet[1960]: E0128 02:02:47.638441 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58d25256cbbeb0c09ea018bb16f84074fc76fb431d0bb096303b51e3bc019e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:02:47.643328 kubelet[1960]: E0128 02:02:47.638468 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d58d25256cbbeb0c09ea018bb16f84074fc76fb431d0bb096303b51e3bc019e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:02:47.643775 kubelet[1960]: E0128 02:02:47.638593 1960 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d58d25256cbbeb0c09ea018bb16f84074fc76fb431d0bb096303b51e3bc019e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t45sz" podUID="1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6" Jan 28 02:02:47.647497 kubelet[1960]: E0128 02:02:47.646357 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6490b3fa885696e7967c2910da2fcc5b6c99783544c6d2f9e0a371b629d4aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:47.647497 kubelet[1960]: E0128 02:02:47.646465 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6490b3fa885696e7967c2910da2fcc5b6c99783544c6d2f9e0a371b629d4aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d657c99cd-hnvlr" Jan 28 02:02:47.647497 kubelet[1960]: E0128 02:02:47.646494 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6490b3fa885696e7967c2910da2fcc5b6c99783544c6d2f9e0a371b629d4aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d657c99cd-hnvlr" Jan 28 02:02:47.647667 kubelet[1960]: E0128 02:02:47.646540 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d657c99cd-hnvlr_calico-system(07489a4c-3aa2-4f2e-8d83-fc6d8034089f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d657c99cd-hnvlr_calico-system(07489a4c-3aa2-4f2e-8d83-fc6d8034089f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6490b3fa885696e7967c2910da2fcc5b6c99783544c6d2f9e0a371b629d4aa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d657c99cd-hnvlr" podUID="07489a4c-3aa2-4f2e-8d83-fc6d8034089f" Jan 28 02:02:48.023641 systemd[1]: run-netns-cni\x2d4301fbc1\x2d2b20\x2de7e9\x2d4d1b\x2deb494d88a4ec.mount: Deactivated successfully. Jan 28 02:02:48.032823 systemd[1]: run-netns-cni\x2d2c44e3a5\x2d09e8\x2da2b3\x2d79a3\x2df7c6db3077bf.mount: Deactivated successfully. 
Jan 28 02:02:48.304445 kubelet[1960]: I0128 02:02:48.301153 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xql9\" (UniqueName: \"kubernetes.io/projected/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-kube-api-access-8xql9\") pod \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\" (UID: \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\") " Jan 28 02:02:48.304445 kubelet[1960]: I0128 02:02:48.301236 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-ca-bundle\") pod \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\" (UID: \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\") " Jan 28 02:02:48.304445 kubelet[1960]: I0128 02:02:48.301264 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-backend-key-pair\") pod \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\" (UID: \"07489a4c-3aa2-4f2e-8d83-fc6d8034089f\") " Jan 28 02:02:48.304445 kubelet[1960]: I0128 02:02:48.303664 1960 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "07489a4c-3aa2-4f2e-8d83-fc6d8034089f" (UID: "07489a4c-3aa2-4f2e-8d83-fc6d8034089f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 02:02:48.340752 systemd[1]: var-lib-kubelet-pods-07489a4c\x2d3aa2\x2d4f2e\x2d8d83\x2dfc6d8034089f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 28 02:02:48.378617 kubelet[1960]: I0128 02:02:48.372088 1960 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "07489a4c-3aa2-4f2e-8d83-fc6d8034089f" (UID: "07489a4c-3aa2-4f2e-8d83-fc6d8034089f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 02:02:48.392117 kubelet[1960]: I0128 02:02:48.387958 1960 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-kube-api-access-8xql9" (OuterVolumeSpecName: "kube-api-access-8xql9") pod "07489a4c-3aa2-4f2e-8d83-fc6d8034089f" (UID: "07489a4c-3aa2-4f2e-8d83-fc6d8034089f"). InnerVolumeSpecName "kube-api-access-8xql9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 02:02:48.404720 systemd[1]: var-lib-kubelet-pods-07489a4c\x2d3aa2\x2d4f2e\x2d8d83\x2dfc6d8034089f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xql9.mount: Deactivated successfully. 
Jan 28 02:02:48.410345 kubelet[1960]: I0128 02:02:48.408103 1960 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xql9\" (UniqueName: \"kubernetes.io/projected/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-kube-api-access-8xql9\") on node \"10.0.0.114\" DevicePath \"\"" Jan 28 02:02:48.410345 kubelet[1960]: I0128 02:02:48.408137 1960 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-ca-bundle\") on node \"10.0.0.114\" DevicePath \"\"" Jan 28 02:02:48.410345 kubelet[1960]: I0128 02:02:48.408150 1960 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07489a4c-3aa2-4f2e-8d83-fc6d8034089f-whisker-backend-key-pair\") on node \"10.0.0.114\" DevicePath \"\"" Jan 28 02:02:48.601289 kubelet[1960]: E0128 02:02:48.596418 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:49.025160 systemd[1]: Removed slice kubepods-besteffort-pod07489a4c_3aa2_4f2e_8d83_fc6d8034089f.slice - libcontainer container kubepods-besteffort-pod07489a4c_3aa2_4f2e_8d83_fc6d8034089f.slice. Jan 28 02:02:49.600262 kubelet[1960]: E0128 02:02:49.600123 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:50.181682 systemd[1]: Created slice kubepods-besteffort-pod9ae7cefc_65b0_4fcd_9083_f9b1fd7f5a6f.slice - libcontainer container kubepods-besteffort-pod9ae7cefc_65b0_4fcd_9083_f9b1fd7f5a6f.slice. 
Jan 28 02:02:50.204413 kubelet[1960]: I0128 02:02:50.204369 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f-whisker-backend-key-pair\") pod \"whisker-54df6f8c4d-bq29n\" (UID: \"9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f\") " pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:02:50.204636 kubelet[1960]: I0128 02:02:50.204614 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f-whisker-ca-bundle\") pod \"whisker-54df6f8c4d-bq29n\" (UID: \"9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f\") " pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:02:50.204826 kubelet[1960]: I0128 02:02:50.204806 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hcnp\" (UniqueName: \"kubernetes.io/projected/9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f-kube-api-access-6hcnp\") pod \"whisker-54df6f8c4d-bq29n\" (UID: \"9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f\") " pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:02:50.613127 kubelet[1960]: E0128 02:02:50.612769 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:50.876758 containerd[1601]: time="2026-01-28T02:02:50.842565134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,}" Jan 28 02:02:50.932408 containerd[1601]: time="2026-01-28T02:02:50.929438294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,}" Jan 28 02:02:50.941682 kubelet[1960]: I0128 02:02:50.941640 1960 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="07489a4c-3aa2-4f2e-8d83-fc6d8034089f" path="/var/lib/kubelet/pods/07489a4c-3aa2-4f2e-8d83-fc6d8034089f/volumes" Jan 28 02:02:51.613393 kubelet[1960]: E0128 02:02:51.613198 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:51.618348 containerd[1601]: time="2026-01-28T02:02:51.618296033Z" level=error msg="Failed to destroy network for sandbox \"df4b3435ace38f75ee85f541a4a3003ba20342a3ae5c87799297f047108d0a34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:51.621626 systemd[1]: run-netns-cni\x2d1a93dbd7\x2df563\x2d9199\x2d2727\x2d6060a254d792.mount: Deactivated successfully. Jan 28 02:02:51.630619 containerd[1601]: time="2026-01-28T02:02:51.630572597Z" level=error msg="Failed to destroy network for sandbox \"6efdafe139e9074a2945fb50b2c723dac7640c63ed343b08e668f025e5bea02a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:51.645224 systemd[1]: run-netns-cni\x2d59c3a140\x2d7f80\x2d2a71\x2dd2f3\x2d1fb3c2d4b325.mount: Deactivated successfully. 
Jan 28 02:02:51.662210 containerd[1601]: time="2026-01-28T02:02:51.662091383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4b3435ace38f75ee85f541a4a3003ba20342a3ae5c87799297f047108d0a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:51.662686 kubelet[1960]: E0128 02:02:51.662642 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4b3435ace38f75ee85f541a4a3003ba20342a3ae5c87799297f047108d0a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:51.663259 kubelet[1960]: E0128 02:02:51.662961 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4b3435ace38f75ee85f541a4a3003ba20342a3ae5c87799297f047108d0a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk" Jan 28 02:02:51.663259 kubelet[1960]: E0128 02:02:51.662999 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4b3435ace38f75ee85f541a4a3003ba20342a3ae5c87799297f047108d0a34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk" 
Jan 28 02:02:51.663539 kubelet[1960]: E0128 02:02:51.663422 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df4b3435ace38f75ee85f541a4a3003ba20342a3ae5c87799297f047108d0a34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:02:51.679533 containerd[1601]: time="2026-01-28T02:02:51.679407998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efdafe139e9074a2945fb50b2c723dac7640c63ed343b08e668f025e5bea02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:51.685231 kubelet[1960]: E0128 02:02:51.684716 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efdafe139e9074a2945fb50b2c723dac7640c63ed343b08e668f025e5bea02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:51.686299 kubelet[1960]: E0128 02:02:51.685733 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6efdafe139e9074a2945fb50b2c723dac7640c63ed343b08e668f025e5bea02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:02:51.686999 kubelet[1960]: E0128 02:02:51.686804 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6efdafe139e9074a2945fb50b2c723dac7640c63ed343b08e668f025e5bea02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:02:51.705564 kubelet[1960]: E0128 02:02:51.687835 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6efdafe139e9074a2945fb50b2c723dac7640c63ed343b08e668f025e5bea02a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:02:52.621351 kubelet[1960]: E0128 02:02:52.615424 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:53.622550 kubelet[1960]: E0128 02:02:53.621523 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:54.650676 
kubelet[1960]: E0128 02:02:54.641127 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:55.660173 kubelet[1960]: E0128 02:02:55.659999 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:56.668691 kubelet[1960]: E0128 02:02:56.664225 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:57.668068 kubelet[1960]: E0128 02:02:57.667779 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:57.929519 containerd[1601]: time="2026-01-28T02:02:57.923568916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,}" Jan 28 02:02:58.605553 containerd[1601]: time="2026-01-28T02:02:58.605023818Z" level=error msg="Failed to destroy network for sandbox \"ca728fb7446f02dae1711a6558a568fea58b0662cb211cbfc2e9a8ddb94482c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:58.614365 systemd[1]: run-netns-cni\x2dc56c3f45\x2d8db3\x2d6c34\x2d5166\x2d41384eafaabf.mount: Deactivated successfully. 
Jan 28 02:02:58.628576 containerd[1601]: time="2026-01-28T02:02:58.628376995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca728fb7446f02dae1711a6558a568fea58b0662cb211cbfc2e9a8ddb94482c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:58.635933 kubelet[1960]: E0128 02:02:58.629346 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca728fb7446f02dae1711a6558a568fea58b0662cb211cbfc2e9a8ddb94482c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:02:58.635933 kubelet[1960]: E0128 02:02:58.629476 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca728fb7446f02dae1711a6558a568fea58b0662cb211cbfc2e9a8ddb94482c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" Jan 28 02:02:58.635933 kubelet[1960]: E0128 02:02:58.629511 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca728fb7446f02dae1711a6558a568fea58b0662cb211cbfc2e9a8ddb94482c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" Jan 28 02:02:58.636481 kubelet[1960]: E0128 02:02:58.629623 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca728fb7446f02dae1711a6558a568fea58b0662cb211cbfc2e9a8ddb94482c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc" Jan 28 02:02:58.677786 kubelet[1960]: E0128 02:02:58.677204 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:59.682207 kubelet[1960]: E0128 02:02:59.681979 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:02:59.974312 containerd[1601]: time="2026-01-28T02:02:59.926044067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:02:59.987645 containerd[1601]: time="2026-01-28T02:02:59.986313291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:00.175266 kubelet[1960]: E0128 02:03:00.171335 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 28 02:03:00.183448 containerd[1601]: time="2026-01-28T02:03:00.181254896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:00.530446 kubelet[1960]: E0128 02:03:00.346207 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:00.683987 kubelet[1960]: E0128 02:03:00.683443 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:00.984123 containerd[1601]: time="2026-01-28T02:03:00.981457907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,}" Jan 28 02:03:01.688088 kubelet[1960]: E0128 02:03:01.686817 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:01.801519 containerd[1601]: time="2026-01-28T02:03:01.797193407Z" level=error msg="Failed to destroy network for sandbox \"7a6fc4c14c3aef6a32ed9a014e118dfa7caa4afc88f1e7b4993613e767a7c79f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:01.832802 systemd[1]: run-netns-cni\x2d3ed14d48\x2da897\x2d0ada\x2de29e\x2d5e2f6c8c449e.mount: Deactivated successfully. 
Jan 28 02:03:01.858985 containerd[1601]: time="2026-01-28T02:03:01.858140616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6fc4c14c3aef6a32ed9a014e118dfa7caa4afc88f1e7b4993613e767a7c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:01.867633 kubelet[1960]: E0128 02:03:01.867227 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6fc4c14c3aef6a32ed9a014e118dfa7caa4afc88f1e7b4993613e767a7c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:01.867633 kubelet[1960]: E0128 02:03:01.867373 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6fc4c14c3aef6a32ed9a014e118dfa7caa4afc88f1e7b4993613e767a7c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:03:01.867633 kubelet[1960]: E0128 02:03:01.867409 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a6fc4c14c3aef6a32ed9a014e118dfa7caa4afc88f1e7b4993613e767a7c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:03:01.868528 kubelet[1960]: E0128 02:03:01.867470 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a6fc4c14c3aef6a32ed9a014e118dfa7caa4afc88f1e7b4993613e767a7c79f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwxm9" podUID="3eaa438b-c98e-4a63-b138-6192c658da00" Jan 28 02:03:01.927052 kubelet[1960]: E0128 02:03:01.925094 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:01.935638 containerd[1601]: time="2026-01-28T02:03:01.935370426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:03:01.950204 containerd[1601]: time="2026-01-28T02:03:01.947164744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:02.115100 containerd[1601]: time="2026-01-28T02:03:02.114711539Z" level=error msg="Failed to destroy network for sandbox \"d25c62d07505293e74bff2c42321428969ce82f22abe1ae4b5cbd24a1cfc2d27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.123822 systemd[1]: 
run-netns-cni\x2d900c544a\x2d9b22\x2d1385\x2dea60\x2db067f73d7afa.mount: Deactivated successfully. Jan 28 02:03:02.179488 containerd[1601]: time="2026-01-28T02:03:02.179223889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25c62d07505293e74bff2c42321428969ce82f22abe1ae4b5cbd24a1cfc2d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.227683 kubelet[1960]: E0128 02:03:02.216270 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25c62d07505293e74bff2c42321428969ce82f22abe1ae4b5cbd24a1cfc2d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.227683 kubelet[1960]: E0128 02:03:02.220293 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25c62d07505293e74bff2c42321428969ce82f22abe1ae4b5cbd24a1cfc2d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" Jan 28 02:03:02.227683 kubelet[1960]: E0128 02:03:02.220362 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d25c62d07505293e74bff2c42321428969ce82f22abe1ae4b5cbd24a1cfc2d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" Jan 28 02:03:02.231052 kubelet[1960]: E0128 02:03:02.229397 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d25c62d07505293e74bff2c42321428969ce82f22abe1ae4b5cbd24a1cfc2d27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8" Jan 28 02:03:02.409439 containerd[1601]: time="2026-01-28T02:03:02.408410329Z" level=error msg="Failed to destroy network for sandbox \"5e16232da0a7d7b26f6689fdb0ea04c4ecb57f9cb88ef68b2b484f6282d2adc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.428186 systemd[1]: run-netns-cni\x2d9bc0e99d\x2d86c7\x2d2643\x2d5044\x2d2e1d1b583a1d.mount: Deactivated successfully. 
Jan 28 02:03:02.468218 containerd[1601]: time="2026-01-28T02:03:02.465290012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16232da0a7d7b26f6689fdb0ea04c4ecb57f9cb88ef68b2b484f6282d2adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.468778 kubelet[1960]: E0128 02:03:02.468731 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16232da0a7d7b26f6689fdb0ea04c4ecb57f9cb88ef68b2b484f6282d2adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.469149 kubelet[1960]: E0128 02:03:02.469122 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16232da0a7d7b26f6689fdb0ea04c4ecb57f9cb88ef68b2b484f6282d2adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:03:02.469278 kubelet[1960]: E0128 02:03:02.469246 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e16232da0a7d7b26f6689fdb0ea04c4ecb57f9cb88ef68b2b484f6282d2adc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:03:02.469515 kubelet[1960]: E0128 02:03:02.469398 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e16232da0a7d7b26f6689fdb0ea04c4ecb57f9cb88ef68b2b484f6282d2adc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:03:02.687693 kubelet[1960]: E0128 02:03:02.687466 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:02.691077 containerd[1601]: time="2026-01-28T02:03:02.690475697Z" level=error msg="Failed to destroy network for sandbox \"30041879b77a02bce129366d72c1375b3dde4307dbf827d9c4fe5886075d8db7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.700993 systemd[1]: run-netns-cni\x2d3449b809\x2db70f\x2d57d3\x2d8dc2\x2df656ee7cb8e3.mount: Deactivated successfully. 
Jan 28 02:03:02.766186 containerd[1601]: time="2026-01-28T02:03:02.761797236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30041879b77a02bce129366d72c1375b3dde4307dbf827d9c4fe5886075d8db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.768424 kubelet[1960]: E0128 02:03:02.762234 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30041879b77a02bce129366d72c1375b3dde4307dbf827d9c4fe5886075d8db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:02.768424 kubelet[1960]: E0128 02:03:02.762312 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30041879b77a02bce129366d72c1375b3dde4307dbf827d9c4fe5886075d8db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c7z7j" Jan 28 02:03:02.768424 kubelet[1960]: E0128 02:03:02.762346 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30041879b77a02bce129366d72c1375b3dde4307dbf827d9c4fe5886075d8db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-7fcdb87857-c7z7j" Jan 28 02:03:02.779491 kubelet[1960]: E0128 02:03:02.762399 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-c7z7j_default(36471742-e8b3-41d5-8572-474eef077778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-c7z7j_default(36471742-e8b3-41d5-8572-474eef077778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30041879b77a02bce129366d72c1375b3dde4307dbf827d9c4fe5886075d8db7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-c7z7j" podUID="36471742-e8b3-41d5-8572-474eef077778" Jan 28 02:03:03.164602 containerd[1601]: time="2026-01-28T02:03:03.160557004Z" level=error msg="Failed to destroy network for sandbox \"28a5595ea35122ae41386a4cf84910ab4aac1527a9849ebbb4ede8daa2287259\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:03.168504 systemd[1]: run-netns-cni\x2df13ebc71\x2ddcdc\x2d6239\x2da505\x2d260eb2cfe8e9.mount: Deactivated successfully. 
Jan 28 02:03:03.205789 containerd[1601]: time="2026-01-28T02:03:03.205729322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5595ea35122ae41386a4cf84910ab4aac1527a9849ebbb4ede8daa2287259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:03.208040 kubelet[1960]: E0128 02:03:03.207790 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5595ea35122ae41386a4cf84910ab4aac1527a9849ebbb4ede8daa2287259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:03.210153 kubelet[1960]: E0128 02:03:03.209694 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5595ea35122ae41386a4cf84910ab4aac1527a9849ebbb4ede8daa2287259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:03:03.210745 kubelet[1960]: E0128 02:03:03.210456 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a5595ea35122ae41386a4cf84910ab4aac1527a9849ebbb4ede8daa2287259\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:03:03.215210 kubelet[1960]: E0128 02:03:03.215085 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28a5595ea35122ae41386a4cf84910ab4aac1527a9849ebbb4ede8daa2287259\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t45sz" podUID="1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6" Jan 28 02:03:03.260613 containerd[1601]: time="2026-01-28T02:03:03.260525128Z" level=error msg="Failed to destroy network for sandbox \"1f01c25495a84b2abd4fa054ca666b1789f2ffd7a29182d60c854b361c64552e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:03.263518 systemd[1]: run-netns-cni\x2d4769521c\x2d6069\x2dfcf0\x2db3af\x2dadc9cbfde667.mount: Deactivated successfully. 
Jan 28 02:03:03.286032 containerd[1601]: time="2026-01-28T02:03:03.281376724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f01c25495a84b2abd4fa054ca666b1789f2ffd7a29182d60c854b361c64552e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:03.286287 kubelet[1960]: E0128 02:03:03.282008 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f01c25495a84b2abd4fa054ca666b1789f2ffd7a29182d60c854b361c64552e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:03.286287 kubelet[1960]: E0128 02:03:03.282072 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f01c25495a84b2abd4fa054ca666b1789f2ffd7a29182d60c854b361c64552e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:03:03.286287 kubelet[1960]: E0128 02:03:03.282097 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f01c25495a84b2abd4fa054ca666b1789f2ffd7a29182d60c854b361c64552e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:03:03.286427 kubelet[1960]: E0128 02:03:03.282142 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f01c25495a84b2abd4fa054ca666b1789f2ffd7a29182d60c854b361c64552e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:03:03.695268 kubelet[1960]: E0128 02:03:03.690086 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:04.694553 kubelet[1960]: E0128 02:03:04.690616 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:04.927596 containerd[1601]: time="2026-01-28T02:03:04.927211423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:05.707125 kubelet[1960]: E0128 02:03:05.703554 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:05.987482 kubelet[1960]: E0128 02:03:05.973536 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:06.709588 kubelet[1960]: E0128 02:03:06.709420 1960 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:06.999190 containerd[1601]: time="2026-01-28T02:03:06.992752316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:07.324439 containerd[1601]: time="2026-01-28T02:03:07.324312050Z" level=error msg="Failed to destroy network for sandbox \"7f2e337cd471e8ad4a3036292101577dbd2c1d42517f7f930cba48f0b816715b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:07.334676 systemd[1]: run-netns-cni\x2d3feb9c69\x2d4a19\x2d5317\x2d47fd\x2d428fc518720a.mount: Deactivated successfully. Jan 28 02:03:07.398638 containerd[1601]: time="2026-01-28T02:03:07.398572637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2e337cd471e8ad4a3036292101577dbd2c1d42517f7f930cba48f0b816715b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:07.406362 kubelet[1960]: E0128 02:03:07.400042 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2e337cd471e8ad4a3036292101577dbd2c1d42517f7f930cba48f0b816715b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:07.406362 kubelet[1960]: E0128 02:03:07.400113 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2e337cd471e8ad4a3036292101577dbd2c1d42517f7f930cba48f0b816715b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:03:07.406362 kubelet[1960]: E0128 02:03:07.400139 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2e337cd471e8ad4a3036292101577dbd2c1d42517f7f930cba48f0b816715b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:03:07.408018 kubelet[1960]: E0128 02:03:07.400207 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f2e337cd471e8ad4a3036292101577dbd2c1d42517f7f930cba48f0b816715b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:03:08.139069 kubelet[1960]: E0128 02:03:08.138823 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:09.348232 kubelet[1960]: E0128 02:03:09.345441 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:09.652191 containerd[1601]: time="2026-01-28T02:03:09.650466363Z" level=error msg="Failed to destroy network for sandbox \"be724d479b4271b2cd8c92c7cdbede7b697cba9de12b64885df1a68d27a49c6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:09.669835 systemd[1]: run-netns-cni\x2d9a643484\x2d9550\x2d5dc7\x2d015e\x2d0b5dd2968ae8.mount: Deactivated successfully. Jan 28 02:03:09.680705 containerd[1601]: time="2026-01-28T02:03:09.679168497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be724d479b4271b2cd8c92c7cdbede7b697cba9de12b64885df1a68d27a49c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:09.682620 kubelet[1960]: E0128 02:03:09.679477 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be724d479b4271b2cd8c92c7cdbede7b697cba9de12b64885df1a68d27a49c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:09.682620 kubelet[1960]: E0128 02:03:09.679555 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be724d479b4271b2cd8c92c7cdbede7b697cba9de12b64885df1a68d27a49c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk" Jan 28 02:03:09.682620 kubelet[1960]: E0128 02:03:09.679586 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be724d479b4271b2cd8c92c7cdbede7b697cba9de12b64885df1a68d27a49c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk" Jan 28 02:03:09.682761 kubelet[1960]: E0128 02:03:09.679693 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be724d479b4271b2cd8c92c7cdbede7b697cba9de12b64885df1a68d27a49c6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:03:10.350403 kubelet[1960]: E0128 02:03:10.350344 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:11.352573 kubelet[1960]: E0128 02:03:11.352532 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:12.358778 kubelet[1960]: E0128 02:03:12.358722 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:12.921533 containerd[1601]: time="2026-01-28T02:03:12.920784962Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:13.361203 kubelet[1960]: E0128 02:03:13.359463 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:13.454751 containerd[1601]: time="2026-01-28T02:03:13.442218900Z" level=error msg="Failed to destroy network for sandbox \"e9b914bf8cda23b471a860cd4cc3b34d08ca44de62e79eac2706e4f7b24d130f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:13.465792 systemd[1]: run-netns-cni\x2db64c0c08\x2d84c5\x2da123\x2d4a8f\x2dbc15772141bf.mount: Deactivated successfully. Jan 28 02:03:13.500793 containerd[1601]: time="2026-01-28T02:03:13.500229747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b914bf8cda23b471a860cd4cc3b34d08ca44de62e79eac2706e4f7b24d130f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:13.506439 kubelet[1960]: E0128 02:03:13.504654 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b914bf8cda23b471a860cd4cc3b34d08ca44de62e79eac2706e4f7b24d130f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:13.506439 kubelet[1960]: E0128 02:03:13.504723 1960 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b914bf8cda23b471a860cd4cc3b34d08ca44de62e79eac2706e4f7b24d130f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" Jan 28 02:03:13.506439 kubelet[1960]: E0128 02:03:13.504757 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b914bf8cda23b471a860cd4cc3b34d08ca44de62e79eac2706e4f7b24d130f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" Jan 28 02:03:13.506684 kubelet[1960]: E0128 02:03:13.504816 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9b914bf8cda23b471a860cd4cc3b34d08ca44de62e79eac2706e4f7b24d130f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc" Jan 28 02:03:13.921490 containerd[1601]: time="2026-01-28T02:03:13.918112665Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:03:14.370329 kubelet[1960]: E0128 02:03:14.370185 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:14.925091 kubelet[1960]: E0128 02:03:14.924411 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:14.931296 containerd[1601]: time="2026-01-28T02:03:14.928079436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:14.938614 containerd[1601]: time="2026-01-28T02:03:14.937177903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:14.973002 containerd[1601]: time="2026-01-28T02:03:14.965385087Z" level=error msg="Failed to destroy network for sandbox \"05138757c1f5a8440f3266c6eb7b55c5414b3b168f457afb591c8e25bdc7b510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:15.020646 systemd[1]: run-netns-cni\x2da92c8e81\x2d6060\x2d5934\x2d5616\x2d22e3eb457c4d.mount: Deactivated successfully. 
Jan 28 02:03:15.128113 containerd[1601]: time="2026-01-28T02:03:15.125203169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05138757c1f5a8440f3266c6eb7b55c5414b3b168f457afb591c8e25bdc7b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:15.131479 kubelet[1960]: E0128 02:03:15.131419 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05138757c1f5a8440f3266c6eb7b55c5414b3b168f457afb591c8e25bdc7b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:15.131758 kubelet[1960]: E0128 02:03:15.131729 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05138757c1f5a8440f3266c6eb7b55c5414b3b168f457afb591c8e25bdc7b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:03:15.132056 kubelet[1960]: E0128 02:03:15.132027 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05138757c1f5a8440f3266c6eb7b55c5414b3b168f457afb591c8e25bdc7b510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:03:15.136068 kubelet[1960]: E0128 02:03:15.135205 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05138757c1f5a8440f3266c6eb7b55c5414b3b168f457afb591c8e25bdc7b510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:03:15.374325 kubelet[1960]: E0128 02:03:15.374093 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:15.615728 containerd[1601]: time="2026-01-28T02:03:15.615508459Z" level=error msg="Failed to destroy network for sandbox \"c4f36e8cc1cd13a57100a8ddcc3184c2cb5f140bc67ca2c2e4020ed0654a6019\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:15.618686 systemd[1]: run-netns-cni\x2d89aaab5a\x2d5047\x2d8454\x2d9fe9\x2d692bbc329192.mount: Deactivated successfully. 
Jan 28 02:03:15.689649 containerd[1601]: time="2026-01-28T02:03:15.678813803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f36e8cc1cd13a57100a8ddcc3184c2cb5f140bc67ca2c2e4020ed0654a6019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:15.693816 kubelet[1960]: E0128 02:03:15.686607 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f36e8cc1cd13a57100a8ddcc3184c2cb5f140bc67ca2c2e4020ed0654a6019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:15.693816 kubelet[1960]: E0128 02:03:15.686962 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f36e8cc1cd13a57100a8ddcc3184c2cb5f140bc67ca2c2e4020ed0654a6019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:03:15.693816 kubelet[1960]: E0128 02:03:15.687008 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4f36e8cc1cd13a57100a8ddcc3184c2cb5f140bc67ca2c2e4020ed0654a6019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:03:15.696182 kubelet[1960]: E0128 02:03:15.687086 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4f36e8cc1cd13a57100a8ddcc3184c2cb5f140bc67ca2c2e4020ed0654a6019\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:03:15.964075 containerd[1601]: time="2026-01-28T02:03:15.947659486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:03:16.390671 kubelet[1960]: E0128 02:03:16.381410 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:16.694720 containerd[1601]: time="2026-01-28T02:03:16.694660368Z" level=error msg="Failed to destroy network for sandbox \"616d41ac162c9e238ca5074cce2aee88297c35372baedf53a2f0c495690499a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:16.713173 systemd[1]: run-netns-cni\x2dade4f803\x2d3196\x2d90e5\x2d50e2\x2d9d46d31b73e3.mount: Deactivated successfully. 
Jan 28 02:03:16.741459 containerd[1601]: time="2026-01-28T02:03:16.739804216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"616d41ac162c9e238ca5074cce2aee88297c35372baedf53a2f0c495690499a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:16.744290 kubelet[1960]: E0128 02:03:16.744234 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"616d41ac162c9e238ca5074cce2aee88297c35372baedf53a2f0c495690499a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:16.744584 kubelet[1960]: E0128 02:03:16.744552 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"616d41ac162c9e238ca5074cce2aee88297c35372baedf53a2f0c495690499a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:03:16.744804 kubelet[1960]: E0128 02:03:16.744775 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"616d41ac162c9e238ca5074cce2aee88297c35372baedf53a2f0c495690499a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:03:16.745960 kubelet[1960]: E0128 02:03:16.745737 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"616d41ac162c9e238ca5074cce2aee88297c35372baedf53a2f0c495690499a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwxm9" podUID="3eaa438b-c98e-4a63-b138-6192c658da00" Jan 28 02:03:16.933977 kubelet[1960]: E0128 02:03:16.932722 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:16.939679 containerd[1601]: time="2026-01-28T02:03:16.936482201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,}" Jan 28 02:03:16.967491 containerd[1601]: time="2026-01-28T02:03:16.962575564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:17.519545 kubelet[1960]: E0128 02:03:17.395069 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:17.994078 containerd[1601]: time="2026-01-28T02:03:17.991310708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,}" Jan 28 
02:03:18.401163 kubelet[1960]: E0128 02:03:18.400984 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:19.188725 containerd[1601]: time="2026-01-28T02:03:19.188322244Z" level=error msg="Failed to destroy network for sandbox \"9f645d1367e27aee12bc3b59357f2922cfb3019acabbd35507fa49b84ef6b50e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.199685 systemd[1]: run-netns-cni\x2d569d8f67\x2dc6cc\x2d6179\x2dc33b\x2db251a04c4f4f.mount: Deactivated successfully. Jan 28 02:03:19.250136 containerd[1601]: time="2026-01-28T02:03:19.249762577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f645d1367e27aee12bc3b59357f2922cfb3019acabbd35507fa49b84ef6b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.252421 kubelet[1960]: E0128 02:03:19.251104 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f645d1367e27aee12bc3b59357f2922cfb3019acabbd35507fa49b84ef6b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.252421 kubelet[1960]: E0128 02:03:19.251206 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9f645d1367e27aee12bc3b59357f2922cfb3019acabbd35507fa49b84ef6b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" Jan 28 02:03:19.252421 kubelet[1960]: E0128 02:03:19.251239 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f645d1367e27aee12bc3b59357f2922cfb3019acabbd35507fa49b84ef6b50e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" Jan 28 02:03:19.257799 kubelet[1960]: E0128 02:03:19.251291 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f645d1367e27aee12bc3b59357f2922cfb3019acabbd35507fa49b84ef6b50e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8" Jan 28 02:03:19.295980 containerd[1601]: time="2026-01-28T02:03:19.295783826Z" level=error msg="Failed to destroy network for sandbox \"81f9bf8a7e28126cb09be1a2be0882d7feed23227c23ad38ead5febcc2c0c619\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 28 02:03:19.337183 systemd[1]: run-netns-cni\x2d3bbab6c0\x2dc01c\x2d8ad1\x2d0a9e\x2de555e464e970.mount: Deactivated successfully. Jan 28 02:03:19.365207 containerd[1601]: time="2026-01-28T02:03:19.349292728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f9bf8a7e28126cb09be1a2be0882d7feed23227c23ad38ead5febcc2c0c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.365587 kubelet[1960]: E0128 02:03:19.349634 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f9bf8a7e28126cb09be1a2be0882d7feed23227c23ad38ead5febcc2c0c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.365587 kubelet[1960]: E0128 02:03:19.349701 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f9bf8a7e28126cb09be1a2be0882d7feed23227c23ad38ead5febcc2c0c619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c7z7j" Jan 28 02:03:19.365587 kubelet[1960]: E0128 02:03:19.349732 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f9bf8a7e28126cb09be1a2be0882d7feed23227c23ad38ead5febcc2c0c619\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c7z7j" Jan 28 02:03:19.367148 kubelet[1960]: E0128 02:03:19.349799 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-c7z7j_default(36471742-e8b3-41d5-8572-474eef077778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-c7z7j_default(36471742-e8b3-41d5-8572-474eef077778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81f9bf8a7e28126cb09be1a2be0882d7feed23227c23ad38ead5febcc2c0c619\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-c7z7j" podUID="36471742-e8b3-41d5-8572-474eef077778" Jan 28 02:03:19.407115 kubelet[1960]: E0128 02:03:19.403633 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:19.454652 containerd[1601]: time="2026-01-28T02:03:19.433567696Z" level=error msg="Failed to destroy network for sandbox \"13334ed8572fd24b5255d73fc7f3515effd11e8d4e4177e45306a6219fa06c04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.460200 systemd[1]: run-netns-cni\x2d6de40a29\x2d92b1\x2d97c1\x2d2b91\x2d3ce01b65a50e.mount: Deactivated successfully. 
Jan 28 02:03:19.524195 containerd[1601]: time="2026-01-28T02:03:19.523825603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13334ed8572fd24b5255d73fc7f3515effd11e8d4e4177e45306a6219fa06c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.524685 kubelet[1960]: E0128 02:03:19.524641 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13334ed8572fd24b5255d73fc7f3515effd11e8d4e4177e45306a6219fa06c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.524766 kubelet[1960]: E0128 02:03:19.524710 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13334ed8572fd24b5255d73fc7f3515effd11e8d4e4177e45306a6219fa06c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:03:19.524766 kubelet[1960]: E0128 02:03:19.524736 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13334ed8572fd24b5255d73fc7f3515effd11e8d4e4177e45306a6219fa06c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:03:19.535064 kubelet[1960]: E0128 02:03:19.534648 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13334ed8572fd24b5255d73fc7f3515effd11e8d4e4177e45306a6219fa06c04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:03:19.825087 containerd[1601]: time="2026-01-28T02:03:19.815172959Z" level=error msg="Failed to destroy network for sandbox \"33c29586c79b5532511dce07a959c1e898d8874411ef2bea25cb831779db4814\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.843461 systemd[1]: run-netns-cni\x2de3389ca7\x2d42a1\x2d79e4\x2d737f\x2dc80ef85d991a.mount: Deactivated successfully. 
Jan 28 02:03:19.870968 containerd[1601]: time="2026-01-28T02:03:19.868146661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c29586c79b5532511dce07a959c1e898d8874411ef2bea25cb831779db4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.873169 kubelet[1960]: E0128 02:03:19.870387 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c29586c79b5532511dce07a959c1e898d8874411ef2bea25cb831779db4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:19.873169 kubelet[1960]: E0128 02:03:19.870481 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c29586c79b5532511dce07a959c1e898d8874411ef2bea25cb831779db4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:03:19.873169 kubelet[1960]: E0128 02:03:19.870658 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33c29586c79b5532511dce07a959c1e898d8874411ef2bea25cb831779db4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:03:19.874353 kubelet[1960]: E0128 02:03:19.870993 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33c29586c79b5532511dce07a959c1e898d8874411ef2bea25cb831779db4814\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t45sz" podUID="1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6" Jan 28 02:03:20.341023 kubelet[1960]: E0128 02:03:20.340343 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:20.409120 kubelet[1960]: E0128 02:03:20.409074 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:20.927638 containerd[1601]: time="2026-01-28T02:03:20.927258365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:21.424820 kubelet[1960]: E0128 02:03:21.422283 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:22.171447 containerd[1601]: time="2026-01-28T02:03:22.171270219Z" level=error msg="Failed to destroy network for sandbox \"0925a17919cc8bb65e3185694de203679efded2bc94ce98df55bc6222e22ec1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 02:03:22.181472 systemd[1]: run-netns-cni\x2dbcc51282\x2d3cb3\x2d22d2\x2d6f7a\x2d00a4d03411c7.mount: Deactivated successfully. Jan 28 02:03:22.202043 containerd[1601]: time="2026-01-28T02:03:22.199079651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0925a17919cc8bb65e3185694de203679efded2bc94ce98df55bc6222e22ec1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:22.202235 kubelet[1960]: E0128 02:03:22.200066 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0925a17919cc8bb65e3185694de203679efded2bc94ce98df55bc6222e22ec1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:22.202235 kubelet[1960]: E0128 02:03:22.200129 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0925a17919cc8bb65e3185694de203679efded2bc94ce98df55bc6222e22ec1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk" Jan 28 02:03:22.202235 kubelet[1960]: E0128 02:03:22.200157 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0925a17919cc8bb65e3185694de203679efded2bc94ce98df55bc6222e22ec1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krgpk" Jan 28 02:03:22.204311 kubelet[1960]: E0128 02:03:22.200212 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0925a17919cc8bb65e3185694de203679efded2bc94ce98df55bc6222e22ec1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:03:22.424138 kubelet[1960]: E0128 02:03:22.423087 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:23.427262 kubelet[1960]: E0128 02:03:23.426613 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:23.925998 containerd[1601]: time="2026-01-28T02:03:23.925305747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:24.429731 kubelet[1960]: E0128 02:03:24.429415 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:25.235126 containerd[1601]: time="2026-01-28T02:03:25.232025775Z" level=error msg="Failed to destroy network for sandbox \"00528b662055121e40dc263badab38fd516c57893786c516b57d241be22e1874\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:25.237229 systemd[1]: run-netns-cni\x2dfdd7c236\x2d41d1\x2d6d3c\x2d557c\x2d190cc351db06.mount: Deactivated successfully. Jan 28 02:03:25.289456 containerd[1601]: time="2026-01-28T02:03:25.289317449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00528b662055121e40dc263badab38fd516c57893786c516b57d241be22e1874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:25.291711 kubelet[1960]: E0128 02:03:25.291590 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00528b662055121e40dc263badab38fd516c57893786c516b57d241be22e1874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:25.292030 kubelet[1960]: E0128 02:03:25.291735 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00528b662055121e40dc263badab38fd516c57893786c516b57d241be22e1874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" Jan 28 02:03:25.292030 kubelet[1960]: E0128 02:03:25.291766 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"00528b662055121e40dc263badab38fd516c57893786c516b57d241be22e1874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" Jan 28 02:03:25.300091 kubelet[1960]: E0128 02:03:25.295043 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00528b662055121e40dc263badab38fd516c57893786c516b57d241be22e1874\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc" Jan 28 02:03:25.437251 kubelet[1960]: E0128 02:03:25.434474 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:25.928112 containerd[1601]: time="2026-01-28T02:03:25.927213333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:03:26.435456 kubelet[1960]: E0128 02:03:26.435392 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:26.606156 containerd[1601]: time="2026-01-28T02:03:26.606089318Z" level=error msg="Failed to destroy network for sandbox \"45b77ae993004f77cd43de5d06c07d08b0b42a9d7bda3cce5014fd560b822113\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:26.618764 systemd[1]: run-netns-cni\x2dc3f46f47\x2d54e0\x2dd3c5\x2df090\x2dc625a77d85e6.mount: Deactivated successfully. Jan 28 02:03:26.637280 containerd[1601]: time="2026-01-28T02:03:26.637161473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b77ae993004f77cd43de5d06c07d08b0b42a9d7bda3cce5014fd560b822113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:26.639717 kubelet[1960]: E0128 02:03:26.639443 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b77ae993004f77cd43de5d06c07d08b0b42a9d7bda3cce5014fd560b822113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:26.639717 kubelet[1960]: E0128 02:03:26.639541 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b77ae993004f77cd43de5d06c07d08b0b42a9d7bda3cce5014fd560b822113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:03:26.639717 kubelet[1960]: E0128 02:03:26.639583 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"45b77ae993004f77cd43de5d06c07d08b0b42a9d7bda3cce5014fd560b822113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" Jan 28 02:03:26.642603 kubelet[1960]: E0128 02:03:26.639723 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45b77ae993004f77cd43de5d06c07d08b0b42a9d7bda3cce5014fd560b822113\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:03:27.440769 kubelet[1960]: E0128 02:03:27.440588 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:28.443477 kubelet[1960]: E0128 02:03:28.441728 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:29.180630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560489483.mount: Deactivated successfully. 
Jan 28 02:03:29.341997 containerd[1601]: time="2026-01-28T02:03:29.339277346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:03:29.347561 containerd[1601]: time="2026-01-28T02:03:29.345656747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 02:03:29.353685 containerd[1601]: time="2026-01-28T02:03:29.351115022Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:03:29.375665 containerd[1601]: time="2026-01-28T02:03:29.375481831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:03:29.403008 containerd[1601]: time="2026-01-28T02:03:29.386992730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 1m10.747813188s" Jan 28 02:03:29.403008 containerd[1601]: time="2026-01-28T02:03:29.402408171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 02:03:29.443645 kubelet[1960]: E0128 02:03:29.443080 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:29.526234 containerd[1601]: time="2026-01-28T02:03:29.523713581Z" level=info msg="CreateContainer within sandbox 
\"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 02:03:29.620507 containerd[1601]: time="2026-01-28T02:03:29.620267687Z" level=info msg="Container 08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:03:29.689558 containerd[1601]: time="2026-01-28T02:03:29.688609477Z" level=info msg="CreateContainer within sandbox \"7a8ac3c2426909a64ef2174e407c09cff49228e7d03ef8f6212ba8c1ee77daa5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e\"" Jan 28 02:03:29.695011 containerd[1601]: time="2026-01-28T02:03:29.694107197Z" level=info msg="StartContainer for \"08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e\"" Jan 28 02:03:29.699578 containerd[1601]: time="2026-01-28T02:03:29.699532698Z" level=info msg="connecting to shim 08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e" address="unix:///run/containerd/s/a207b2290cacd6be4ade278567ccf17e2980d2afc34fbeb68cfeb5596dd10f31" protocol=ttrpc version=3 Jan 28 02:03:30.014281 systemd[1]: Started cri-containerd-08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e.scope - libcontainer container 08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e. 
Jan 28 02:03:30.248000 audit: BPF prog-id=102 op=LOAD Jan 28 02:03:30.302942 kernel: audit: type=1334 audit(1769565810.248:394): prog-id=102 op=LOAD Jan 28 02:03:30.303142 kernel: audit: type=1300 audit(1769565810.248:394): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.248000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.248000 audit: BPF prog-id=103 op=LOAD Jan 28 02:03:30.350011 kernel: audit: type=1327 audit(1769565810.248:394): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.350201 kernel: audit: type=1334 audit(1769565810.248:395): prog-id=103 op=LOAD Jan 28 02:03:30.350246 kernel: audit: type=1300 audit(1769565810.248:395): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
02:03:30.248000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.390191 kernel: audit: type=1327 audit(1769565810.248:395): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.248000 audit: BPF prog-id=103 op=UNLOAD Jan 28 02:03:30.436407 kernel: audit: type=1334 audit(1769565810.248:396): prog-id=103 op=UNLOAD Jan 28 02:03:30.441359 kernel: audit: type=1300 audit(1769565810.248:396): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.248000 audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.447948 kubelet[1960]: E0128 02:03:30.446522 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:30.483318 kernel: audit: type=1327 
audit(1769565810.248:396): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.521741 kernel: audit: type=1334 audit(1769565810.248:397): prog-id=102 op=UNLOAD Jan 28 02:03:30.248000 audit: BPF prog-id=102 op=UNLOAD Jan 28 02:03:30.248000 audit[3512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.248000 audit: BPF prog-id=104 op=LOAD Jan 28 02:03:30.248000 audit[3512]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2112 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:30.248000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038616432383964626563636334393231336461643732613536326538 Jan 28 02:03:30.773253 containerd[1601]: time="2026-01-28T02:03:30.772513284Z" level=info msg="StartContainer for \"08ad289dbeccc49213dad72a562e802056c132f3b234788da52cb0999b1f985e\" returns successfully" Jan 28 02:03:30.923327 containerd[1601]: time="2026-01-28T02:03:30.918736651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:30.923327 containerd[1601]: time="2026-01-28T02:03:30.922992169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:31.233569 containerd[1601]: time="2026-01-28T02:03:31.231955822Z" level=error msg="Failed to destroy network for sandbox \"45ba270e5a9def9ce63ae6ce7035f0d06cd6c512285605e6a9d923fd9796c8b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:31.239482 systemd[1]: run-netns-cni\x2d75f93a9a\x2dee08\x2d3e7b\x2d9db2\x2da52c36828533.mount: Deactivated successfully. 
Jan 28 02:03:31.243975 containerd[1601]: time="2026-01-28T02:03:31.243671548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ba270e5a9def9ce63ae6ce7035f0d06cd6c512285605e6a9d923fd9796c8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:31.244914 kubelet[1960]: E0128 02:03:31.244456 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ba270e5a9def9ce63ae6ce7035f0d06cd6c512285605e6a9d923fd9796c8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:31.244914 kubelet[1960]: E0128 02:03:31.244539 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ba270e5a9def9ce63ae6ce7035f0d06cd6c512285605e6a9d923fd9796c8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:03:31.244914 kubelet[1960]: E0128 02:03:31.244574 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ba270e5a9def9ce63ae6ce7035f0d06cd6c512285605e6a9d923fd9796c8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5zdgq" Jan 28 02:03:31.245080 kubelet[1960]: E0128 02:03:31.244703 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45ba270e5a9def9ce63ae6ce7035f0d06cd6c512285605e6a9d923fd9796c8b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:03:31.367973 kubelet[1960]: E0128 02:03:31.367809 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:31.464561 kubelet[1960]: E0128 02:03:31.456610 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:31.522783 containerd[1601]: time="2026-01-28T02:03:31.520364217Z" level=error msg="Failed to destroy network for sandbox \"8abea17b5883ae132134ab434287e672dfacc30ec59fffc4c8c5d45c867ea20d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:31.541525 systemd[1]: run-netns-cni\x2da4bd8492\x2d678c\x2d2692\x2d022a\x2d13d73971ebf0.mount: Deactivated successfully. 
Jan 28 02:03:31.545113 kubelet[1960]: E0128 02:03:31.544056 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abea17b5883ae132134ab434287e672dfacc30ec59fffc4c8c5d45c867ea20d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:31.545113 kubelet[1960]: E0128 02:03:31.544120 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abea17b5883ae132134ab434287e672dfacc30ec59fffc4c8c5d45c867ea20d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:03:31.545113 kubelet[1960]: E0128 02:03:31.544202 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abea17b5883ae132134ab434287e672dfacc30ec59fffc4c8c5d45c867ea20d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54df6f8c4d-bq29n" Jan 28 02:03:31.545313 containerd[1601]: time="2026-01-28T02:03:31.543692200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8abea17b5883ae132134ab434287e672dfacc30ec59fffc4c8c5d45c867ea20d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 02:03:31.545470 kubelet[1960]: E0128 02:03:31.544256 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8abea17b5883ae132134ab434287e672dfacc30ec59fffc4c8c5d45c867ea20d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:03:31.577828 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 02:03:31.577984 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 02:03:31.921020 kubelet[1960]: E0128 02:03:31.920693 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:31.934634 kubelet[1960]: E0128 02:03:31.927749 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:31.934791 containerd[1601]: time="2026-01-28T02:03:31.931714490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:31.934791 containerd[1601]: time="2026-01-28T02:03:31.934218386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:32.436735 kubelet[1960]: E0128 02:03:32.418722 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:32.457171 kubelet[1960]: E0128 02:03:32.457059 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:32.934171 kubelet[1960]: I0128 02:03:32.934040 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kt9ff" podStartSLOduration=6.528948449 podStartE2EDuration="2m9.934016307s" podCreationTimestamp="2026-01-28 02:01:23 +0000 UTC" firstStartedPulling="2026-01-28 02:01:26.008393434 +0000 UTC m=+47.020105140" lastFinishedPulling="2026-01-28 02:03:29.413461292 +0000 UTC m=+170.425172998" observedRunningTime="2026-01-28 02:03:31.601112206 +0000 UTC m=+172.612824082" watchObservedRunningTime="2026-01-28 02:03:32.934016307 +0000 UTC m=+173.945728013" Jan 28 
02:03:33.471235 kubelet[1960]: E0128 02:03:33.470993 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:33.490578 kubelet[1960]: E0128 02:03:33.489343 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:32.938 [INFO][3698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:32.980 [INFO][3698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" iface="eth0" netns="/var/run/netns/cni-a21dc267-8a62-d9a6-d63a-fd9eed58df9f" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:32.986 [INFO][3698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" iface="eth0" netns="/var/run/netns/cni-a21dc267-8a62-d9a6-d63a-fd9eed58df9f" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:32.990 [INFO][3698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" iface="eth0" netns="/var/run/netns/cni-a21dc267-8a62-d9a6-d63a-fd9eed58df9f" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:33.191 [INFO][3698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:33.192 [INFO][3698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:33.426 [INFO][3750] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" HandleID="k8s-pod-network.2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:33.427 [INFO][3750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:33.636036 containerd[1601]: 2026-01-28 02:03:33.428 [INFO][3750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:33.645131 containerd[1601]: 2026-01-28 02:03:33.529 [WARNING][3750] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" HandleID="k8s-pod-network.2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:33.645131 containerd[1601]: 2026-01-28 02:03:33.539 [INFO][3750] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" HandleID="k8s-pod-network.2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:33.645131 containerd[1601]: 2026-01-28 02:03:33.578 [INFO][3750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:03:33.645131 containerd[1601]: 2026-01-28 02:03:33.604 [INFO][3698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6" Jan 28 02:03:33.638547 systemd[1]: run-netns-cni\x2da21dc267\x2d8a62\x2dd9a6\x2dd63a\x2dfd9eed58df9f.mount: Deactivated successfully. 
Jan 28 02:03:33.672973 containerd[1601]: time="2026-01-28T02:03:33.652705099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:33.685211 kubelet[1960]: E0128 02:03:33.653332 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:33.685211 kubelet[1960]: E0128 02:03:33.653515 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:03:33.685211 kubelet[1960]: E0128 02:03:33.653542 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-zwxm9" Jan 28 02:03:33.686713 kubelet[1960]: E0128 02:03:33.656172 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwxm9_kube-system(3eaa438b-c98e-4a63-b138-6192c658da00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2943d67effbc603ecef4e8e42d4317ff813d98d0ac9d58e935ac151de5a80fb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwxm9" podUID="3eaa438b-c98e-4a63-b138-6192c658da00" Jan 28 02:03:33.944168 containerd[1601]: time="2026-01-28T02:03:33.929033541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,}" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.345 [INFO][3704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.346 [INFO][3704] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" iface="eth0" netns="/var/run/netns/cni-d48cce4a-392c-0589-8ed4-84ffb5d0db0b" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.346 [INFO][3704] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" iface="eth0" netns="/var/run/netns/cni-d48cce4a-392c-0589-8ed4-84ffb5d0db0b" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.348 [INFO][3704] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" iface="eth0" netns="/var/run/netns/cni-d48cce4a-392c-0589-8ed4-84ffb5d0db0b" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.348 [INFO][3704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.348 [INFO][3704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.571 [INFO][3757] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" HandleID="k8s-pod-network.9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.572 [INFO][3757] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:34.073782 containerd[1601]: 2026-01-28 02:03:33.585 [INFO][3757] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:34.095147 containerd[1601]: 2026-01-28 02:03:33.753 [WARNING][3757] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" HandleID="k8s-pod-network.9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:34.095147 containerd[1601]: 2026-01-28 02:03:33.784 [INFO][3757] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" HandleID="k8s-pod-network.9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:34.095147 containerd[1601]: 2026-01-28 02:03:33.869 [INFO][3757] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:03:34.095147 containerd[1601]: 2026-01-28 02:03:33.908 [INFO][3704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b" Jan 28 02:03:34.106694 systemd[1]: run-netns-cni\x2dd48cce4a\x2d392c\x2d0589\x2d8ed4\x2d84ffb5d0db0b.mount: Deactivated successfully. 
Jan 28 02:03:34.177616 containerd[1601]: time="2026-01-28T02:03:34.174754175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:34.178204 kubelet[1960]: E0128 02:03:34.176924 1960 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:03:34.178204 kubelet[1960]: E0128 02:03:34.177020 1960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:03:34.178204 kubelet[1960]: E0128 02:03:34.177054 1960 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-t45sz" Jan 28 02:03:34.178516 kubelet[1960]: E0128 02:03:34.177120 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t45sz_kube-system(1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a4d76491f02e2ea430cdd40d2b70b8a1655ba7da9d58be3d90bc2b677eac81b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t45sz" podUID="1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6" Jan 28 02:03:34.479382 kubelet[1960]: E0128 02:03:34.473821 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:34.630078 kubelet[1960]: E0128 02:03:34.628266 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:34.636532 kubelet[1960]: E0128 02:03:34.634626 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:34.647432 containerd[1601]: time="2026-01-28T02:03:34.635305242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:34.647432 containerd[1601]: time="2026-01-28T02:03:34.644572614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,}" Jan 28 02:03:34.984971 
containerd[1601]: time="2026-01-28T02:03:34.982809763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:03:36.825039 kubelet[1960]: E0128 02:03:36.646542 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:36.834211 containerd[1601]: time="2026-01-28T02:03:36.686754889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:38.120799 kubelet[1960]: E0128 02:03:37.891629 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:38.241658 kubelet[1960]: E0128 02:03:38.236824 1960 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.321s" Jan 28 02:03:38.894054 kubelet[1960]: E0128 02:03:38.893771 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:38.925102 containerd[1601]: time="2026-01-28T02:03:38.925009726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:39.877170 systemd-networkd[1507]: cali41f42be1993: Link UP Jan 28 02:03:39.889069 systemd-networkd[1507]: cali41f42be1993: Gained carrier Jan 28 02:03:39.895290 kubelet[1960]: E0128 02:03:39.894831 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:40.002216 containerd[1601]: 2026-01-28 02:03:38.152 [INFO][3792] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:40.002216 containerd[1601]: 2026-01-28 02:03:38.382 
[INFO][3792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0 nginx-deployment-7fcdb87857- default 36471742-e8b3-41d5-8572-474eef077778 1360 0 2026-01-28 02:02:41 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.114 nginx-deployment-7fcdb87857-c7z7j eth0 default [] [] [kns.default ksa.default.default] cali41f42be1993 [] [] }} ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-" Jan 28 02:03:40.002216 containerd[1601]: 2026-01-28 02:03:38.387 [INFO][3792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.002216 containerd[1601]: 2026-01-28 02:03:38.857 [INFO][3871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" HandleID="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Workload="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:38.857 [INFO][3871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" HandleID="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Workload="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138320), 
Attrs:map[string]string{"namespace":"default", "node":"10.0.0.114", "pod":"nginx-deployment-7fcdb87857-c7z7j", "timestamp":"2026-01-28 02:03:38.857022626 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:38.857 [INFO][3871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:38.861 [INFO][3871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:38.861 [INFO][3871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:38.910 [INFO][3871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" host="10.0.0.114" Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:38.981 [INFO][3871] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:39.084 [INFO][3871] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:39.100 [INFO][3871] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:39.116 [INFO][3871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:40.003225 containerd[1601]: 2026-01-28 02:03:39.119 [INFO][3871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" host="10.0.0.114" Jan 28 
02:03:40.008142 containerd[1601]: 2026-01-28 02:03:39.138 [INFO][3871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d Jan 28 02:03:40.008142 containerd[1601]: 2026-01-28 02:03:39.178 [INFO][3871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" host="10.0.0.114" Jan 28 02:03:40.008142 containerd[1601]: 2026-01-28 02:03:39.234 [INFO][3871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.129/26] block=192.168.101.128/26 handle="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" host="10.0.0.114" Jan 28 02:03:40.008142 containerd[1601]: 2026-01-28 02:03:39.234 [INFO][3871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.129/26] handle="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" host="10.0.0.114" Jan 28 02:03:40.008142 containerd[1601]: 2026-01-28 02:03:39.234 [INFO][3871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:03:40.008142 containerd[1601]: 2026-01-28 02:03:39.234 [INFO][3871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.129/26] IPv6=[] ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" HandleID="k8s-pod-network.801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Workload="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.008324 containerd[1601]: 2026-01-28 02:03:39.269 [INFO][3792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"36471742-e8b3-41d5-8572-474eef077778", ResourceVersion:"1360", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-c7z7j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali41f42be1993", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:40.008324 containerd[1601]: 2026-01-28 02:03:39.270 [INFO][3792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.129/32] ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.008516 containerd[1601]: 2026-01-28 02:03:39.270 [INFO][3792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41f42be1993 ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.008516 containerd[1601]: 2026-01-28 02:03:39.896 [INFO][3792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.014389 containerd[1601]: 2026-01-28 02:03:39.899 [INFO][3792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"36471742-e8b3-41d5-8572-474eef077778", ResourceVersion:"1360", Generation:0, CreationTimestamp:time.Date(2026, time.January, 
28, 2, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d", Pod:"nginx-deployment-7fcdb87857-c7z7j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali41f42be1993", MAC:"02:78:13:a5:05:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:40.014563 containerd[1601]: 2026-01-28 02:03:39.978 [INFO][3792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" Namespace="default" Pod="nginx-deployment-7fcdb87857-c7z7j" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--c7z7j-eth0" Jan 28 02:03:40.035717 systemd-networkd[1507]: cali8736c23d684: Link UP Jan 28 02:03:40.063615 systemd-networkd[1507]: cali8736c23d684: Gained carrier Jan 28 02:03:40.412369 kubelet[1960]: E0128 02:03:40.402005 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:40.595740 containerd[1601]: 2026-01-28 02:03:38.612 [INFO][3828] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:40.595740 containerd[1601]: 2026-01-28 02:03:38.706 [INFO][3828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-csi--node--driver--krgpk-eth0 csi-node-driver- calico-system 15b582de-4a9d-49bf-b8af-da9b7c0dc36f 1061 0 2026-01-28 02:01:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.114 csi-node-driver-krgpk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8736c23d684 [] [] }} ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-" Jan 28 02:03:40.595740 containerd[1601]: 2026-01-28 02:03:38.706 [INFO][3828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.595740 containerd[1601]: 2026-01-28 02:03:38.975 [INFO][3883] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" HandleID="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Workload="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:38.976 [INFO][3883] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" HandleID="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Workload="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003beaf0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.114", "pod":"csi-node-driver-krgpk", "timestamp":"2026-01-28 02:03:38.975744139 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:38.976 [INFO][3883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.235 [INFO][3883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.236 [INFO][3883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.292 [INFO][3883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" host="10.0.0.114" Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.445 [INFO][3883] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.577 [INFO][3883] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.609 [INFO][3883] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.775 [INFO][3883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:40.596283 containerd[1601]: 2026-01-28 02:03:39.775 [INFO][3883] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" host="10.0.0.114" Jan 28 
02:03:40.596723 containerd[1601]: 2026-01-28 02:03:39.870 [INFO][3883] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d Jan 28 02:03:40.596723 containerd[1601]: 2026-01-28 02:03:39.903 [INFO][3883] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" host="10.0.0.114" Jan 28 02:03:40.596723 containerd[1601]: 2026-01-28 02:03:39.984 [INFO][3883] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.130/26] block=192.168.101.128/26 handle="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" host="10.0.0.114" Jan 28 02:03:40.596723 containerd[1601]: 2026-01-28 02:03:39.984 [INFO][3883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.130/26] handle="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" host="10.0.0.114" Jan 28 02:03:40.596723 containerd[1601]: 2026-01-28 02:03:39.984 [INFO][3883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:03:40.596723 containerd[1601]: 2026-01-28 02:03:39.984 [INFO][3883] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.130/26] IPv6=[] ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" HandleID="k8s-pod-network.ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Workload="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.597091 containerd[1601]: 2026-01-28 02:03:40.025 [INFO][3828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-csi--node--driver--krgpk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15b582de-4a9d-49bf-b8af-da9b7c0dc36f", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"csi-node-driver-krgpk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8736c23d684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:40.597233 containerd[1601]: 2026-01-28 02:03:40.025 [INFO][3828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.130/32] ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.597233 containerd[1601]: 2026-01-28 02:03:40.025 [INFO][3828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8736c23d684 ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.597233 containerd[1601]: 2026-01-28 02:03:40.410 [INFO][3828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.597322 containerd[1601]: 2026-01-28 02:03:40.449 [INFO][3828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-csi--node--driver--krgpk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"15b582de-4a9d-49bf-b8af-da9b7c0dc36f", ResourceVersion:"1061", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 28, 2, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d", Pod:"csi-node-driver-krgpk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8736c23d684", MAC:"52:04:0b:f5:5d:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:40.597475 containerd[1601]: 2026-01-28 02:03:40.549 [INFO][3828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" Namespace="calico-system" Pod="csi-node-driver-krgpk" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--krgpk-eth0" Jan 28 02:03:40.905970 kubelet[1960]: E0128 02:03:40.895590 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:41.039250 systemd-networkd[1507]: cali41f42be1993: Gained IPv6LL Jan 28 02:03:41.422457 systemd-networkd[1507]: cali8736c23d684: Gained IPv6LL Jan 28 02:03:41.467018 systemd-networkd[1507]: cali6ba3c2f0fbb: Link UP Jan 28 02:03:41.469335 
systemd-networkd[1507]: cali6ba3c2f0fbb: Gained carrier Jan 28 02:03:41.787653 containerd[1601]: 2026-01-28 02:03:38.689 [INFO][3807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:41.787653 containerd[1601]: 2026-01-28 02:03:38.837 [INFO][3807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0 coredns-668d6bf9bc- kube-system 3eaa438b-c98e-4a63-b138-6192c658da00 1548 0 2026-01-28 01:58:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.114 coredns-668d6bf9bc-zwxm9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ba3c2f0fbb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-" Jan 28 02:03:41.787653 containerd[1601]: 2026-01-28 02:03:38.837 [INFO][3807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.787653 containerd[1601]: 2026-01-28 02:03:39.098 [INFO][3892] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" HandleID="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:39.099 [INFO][3892] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" HandleID="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135410), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.114", "pod":"coredns-668d6bf9bc-zwxm9", "timestamp":"2026-01-28 02:03:39.09809592 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:39.100 [INFO][3892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.014 [INFO][3892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.024 [INFO][3892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.152 [INFO][3892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" host="10.0.0.114" Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.534 [INFO][3892] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.835 [INFO][3892] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.852 [INFO][3892] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.886 [INFO][3892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:41.798606 containerd[1601]: 2026-01-28 02:03:40.886 [INFO][3892] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" host="10.0.0.114" Jan 28 02:03:41.799470 containerd[1601]: 2026-01-28 02:03:40.913 [INFO][3892] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f Jan 28 02:03:41.799470 containerd[1601]: 2026-01-28 02:03:40.977 [INFO][3892] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" host="10.0.0.114" Jan 28 02:03:41.799470 containerd[1601]: 2026-01-28 02:03:41.094 [INFO][3892] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.131/26] block=192.168.101.128/26 handle="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" host="10.0.0.114" Jan 28 02:03:41.799470 containerd[1601]: 2026-01-28 02:03:41.094 [INFO][3892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.131/26] handle="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" host="10.0.0.114" Jan 28 02:03:41.799470 containerd[1601]: 2026-01-28 02:03:41.094 [INFO][3892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:03:41.799470 containerd[1601]: 2026-01-28 02:03:41.094 [INFO][3892] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.131/26] IPv6=[] ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" HandleID="k8s-pod-network.0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.799676 containerd[1601]: 2026-01-28 02:03:41.412 [INFO][3807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3eaa438b-c98e-4a63-b138-6192c658da00", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"coredns-668d6bf9bc-zwxm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ba3c2f0fbb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:41.801001 containerd[1601]: 2026-01-28 02:03:41.421 [INFO][3807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.131/32] ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.801001 containerd[1601]: 2026-01-28 02:03:41.421 [INFO][3807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ba3c2f0fbb ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.801001 containerd[1601]: 2026-01-28 02:03:41.469 [INFO][3807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.801209 containerd[1601]: 2026-01-28 02:03:41.472 [INFO][3807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3eaa438b-c98e-4a63-b138-6192c658da00", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f", Pod:"coredns-668d6bf9bc-zwxm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ba3c2f0fbb", MAC:"7a:2f:11:27:6c:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:41.801209 containerd[1601]: 2026-01-28 02:03:41.772 [INFO][3807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwxm9" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--zwxm9-eth0" Jan 28 02:03:41.896193 kubelet[1960]: E0128 02:03:41.896119 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:41.921382 containerd[1601]: time="2026-01-28T02:03:41.908020202Z" level=info msg="connecting to shim 801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d" address="unix:///run/containerd/s/a92fc5cb57384926ed838b215460989f0d9a5c8a989d022b96af53aaa9327c6d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:41.934285 containerd[1601]: time="2026-01-28T02:03:41.926083389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:03:41.998953 containerd[1601]: time="2026-01-28T02:03:41.991471021Z" level=info msg="connecting to shim ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d" address="unix:///run/containerd/s/10b58d964e50ea5aa9cf77205baf35d5ee877d6bd4112911c1b6e09126327931" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:42.594279 systemd-networkd[1507]: calie4e3e57d4e5: Link UP Jan 28 02:03:42.596272 systemd[1]: Started cri-containerd-ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d.scope - libcontainer container ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d. 
Jan 28 02:03:42.623495 systemd-networkd[1507]: calie4e3e57d4e5: Gained carrier Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:38.629 [INFO][3803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:38.811 [INFO][3803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0 coredns-668d6bf9bc- kube-system 1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6 1551 0 2026-01-28 01:58:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.114 coredns-668d6bf9bc-t45sz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4e3e57d4e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:38.812 [INFO][3803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:39.305 [INFO][3890] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" HandleID="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:39.306 [INFO][3890] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" HandleID="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000511a30), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.114", "pod":"coredns-668d6bf9bc-t45sz", "timestamp":"2026-01-28 02:03:39.305798401 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:39.306 [INFO][3890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.098 [INFO][3890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.099 [INFO][3890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.471 [INFO][3890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.803 [INFO][3890] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.848 [INFO][3890] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.870 [INFO][3890] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.905 [INFO][3890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.925 [INFO][3890] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:41.999 [INFO][3890] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:42.206 [INFO][3890] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:42.283 [INFO][3890] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.132/26] block=192.168.101.128/26 handle="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:42.283 [INFO][3890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.132/26] handle="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" host="10.0.0.114" Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:42.283 [INFO][3890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:03:42.766521 containerd[1601]: 2026-01-28 02:03:42.283 [INFO][3890] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.132/26] IPv6=[] ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" HandleID="k8s-pod-network.f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Workload="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.769258 containerd[1601]: 2026-01-28 02:03:42.329 [INFO][3803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6", ResourceVersion:"1551", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"coredns-668d6bf9bc-t45sz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4e3e57d4e5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:42.769258 containerd[1601]: 2026-01-28 02:03:42.538 [INFO][3803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.132/32] ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.769258 containerd[1601]: 2026-01-28 02:03:42.542 [INFO][3803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4e3e57d4e5 ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.769258 containerd[1601]: 2026-01-28 02:03:42.626 [INFO][3803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.769258 containerd[1601]: 2026-01-28 02:03:42.627 [INFO][3803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6", ResourceVersion:"1551", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de", Pod:"coredns-668d6bf9bc-t45sz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4e3e57d4e5", MAC:"42:3e:2c:e1:b3:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:42.769258 containerd[1601]: 2026-01-28 02:03:42.717 [INFO][3803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" Namespace="kube-system" Pod="coredns-668d6bf9bc-t45sz" WorkloadEndpoint="10.0.0.114-k8s-coredns--668d6bf9bc--t45sz-eth0" Jan 28 02:03:42.786068 systemd[1]: Started cri-containerd-801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d.scope - libcontainer container 801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d. Jan 28 02:03:42.822190 containerd[1601]: time="2026-01-28T02:03:42.822133445Z" level=info msg="connecting to shim 0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f" address="unix:///run/containerd/s/6608dff3d6fe1644a32bd3a09440a25251de2c6f8759df1e7ce260ae8746a7a3" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:42.905209 kubelet[1960]: E0128 02:03:42.898755 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:42.963556 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 28 02:03:42.963683 kernel: audit: type=1334 audit(1769565822.946:399): prog-id=105 op=LOAD Jan 28 02:03:42.946000 audit: BPF prog-id=105 op=LOAD Jan 28 02:03:42.969059 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:42.974957 containerd[1601]: time="2026-01-28T02:03:42.974679266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:42.949000 audit: BPF prog-id=106 op=LOAD Jan 28 02:03:42.988490 kernel: audit: type=1334 audit(1769565822.949:400): prog-id=106 op=LOAD Jan 28 02:03:42.988424 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:42.949000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:43.044008 kernel: audit: type=1300 audit(1769565822.949:400): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:43.089485 kernel: audit: type=1327 audit(1769565822.949:400): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.949000 audit: BPF prog-id=106 op=UNLOAD Jan 28 02:03:43.099659 kernel: audit: type=1334 audit(1769565822.949:401): prog-id=106 op=UNLOAD Jan 28 02:03:43.233253 kernel: audit: type=1300 audit(1769565822.949:401): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.949000 audit[4098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:43.318754 kernel: audit: 
type=1327 audit(1769565822.949:401): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:43.325767 kernel: audit: type=1334 audit(1769565822.949:402): prog-id=107 op=LOAD Jan 28 02:03:43.326536 kernel: audit: type=1300 audit(1769565822.949:402): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:43.326589 kernel: audit: type=1327 audit(1769565822.949:402): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.949000 audit: BPF prog-id=107 op=LOAD Jan 28 02:03:42.949000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.949000 audit: BPF prog-id=108 op=LOAD Jan 28 02:03:42.949000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.950000 audit: BPF prog-id=108 op=UNLOAD Jan 28 02:03:42.950000 audit[4098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.950000 audit: BPF prog-id=107 op=UNLOAD Jan 28 02:03:42.950000 audit[4098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
02:03:42.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.950000 audit: BPF prog-id=109 op=LOAD Jan 28 02:03:42.950000 audit[4098]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4077 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830316166666439346162646335363139346463343264646635343961 Jan 28 02:03:42.968000 audit: BPF prog-id=110 op=LOAD Jan 28 02:03:42.971000 audit: BPF prog-id=111 op=LOAD Jan 28 02:03:42.971000 audit[4130]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:42.971000 audit: BPF prog-id=111 op=UNLOAD Jan 28 02:03:42.971000 audit[4130]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:42.971000 audit: BPF prog-id=112 op=LOAD Jan 28 02:03:42.971000 audit[4130]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:42.971000 audit: BPF prog-id=113 op=LOAD Jan 28 02:03:42.971000 audit[4130]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:42.975000 audit: BPF prog-id=113 op=UNLOAD Jan 28 02:03:42.975000 audit[4130]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:42.975000 audit: BPF prog-id=112 op=UNLOAD Jan 28 02:03:42.975000 audit[4130]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:42.975000 audit: BPF prog-id=114 op=LOAD Jan 28 02:03:42.975000 audit[4130]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4086 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:42.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165333835643434303763313231616330656130313565633964336232 Jan 28 02:03:43.227035 systemd-networkd[1507]: cali6ba3c2f0fbb: Gained IPv6LL Jan 28 02:03:43.856772 systemd-networkd[1507]: calic65cbbcb9c7: Link UP Jan 28 02:03:43.871928 
systemd-networkd[1507]: calic65cbbcb9c7: Gained carrier Jan 28 02:03:43.888723 containerd[1601]: time="2026-01-28T02:03:43.888627783Z" level=info msg="connecting to shim f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de" address="unix:///run/containerd/s/d30d4fa81c08273cc71596cc163a72d8ef3aeffb412dce5825b0aaaf20c766d5" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:43.907430 kubelet[1960]: E0128 02:03:43.907313 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:43.915158 systemd-networkd[1507]: calie4e3e57d4e5: Gained IPv6LL Jan 28 02:03:43.919765 containerd[1601]: time="2026-01-28T02:03:43.919442773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,}" Jan 28 02:03:43.994254 systemd[1]: Started cri-containerd-0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f.scope - libcontainer container 0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f. 
Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:38.685 [INFO][3819] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:38.860 [INFO][3819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0 calico-apiserver-6656f8f9d9- calico-apiserver 67521aee-68dc-4703-af3e-6a8c6df60cd8 1356 0 2026-01-28 01:58:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6656f8f9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.114 calico-apiserver-6656f8f9d9-spnd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic65cbbcb9c7 [] [] }} ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:38.862 [INFO][3819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:39.502 [INFO][3899] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" HandleID="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Workload="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:39.503 [INFO][3899] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" HandleID="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Workload="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000121ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.114", "pod":"calico-apiserver-6656f8f9d9-spnd9", "timestamp":"2026-01-28 02:03:39.484581046 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:39.503 [INFO][3899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.285 [INFO][3899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.285 [INFO][3899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.496 [INFO][3899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.634 [INFO][3899] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.826 [INFO][3899] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.842 [INFO][3899] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.850 [INFO][3899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.850 [INFO][3899] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.877 [INFO][3899] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2 Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.922 [INFO][3899] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.991 [INFO][3899] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.133/26] block=192.168.101.128/26 
handle="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:42.995 [INFO][3899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.133/26] handle="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" host="10.0.0.114" Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:43.251 [INFO][3899] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:03:44.098667 containerd[1601]: 2026-01-28 02:03:43.252 [INFO][3899] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.133/26] IPv6=[] ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" HandleID="k8s-pod-network.184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Workload="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.109160 containerd[1601]: 2026-01-28 02:03:43.528 [INFO][3819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0", GenerateName:"calico-apiserver-6656f8f9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"67521aee-68dc-4703-af3e-6a8c6df60cd8", ResourceVersion:"1356", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6656f8f9d9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"calico-apiserver-6656f8f9d9-spnd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic65cbbcb9c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:44.109160 containerd[1601]: 2026-01-28 02:03:43.799 [INFO][3819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.133/32] ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.109160 containerd[1601]: 2026-01-28 02:03:43.805 [INFO][3819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic65cbbcb9c7 ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.109160 containerd[1601]: 2026-01-28 02:03:43.873 [INFO][3819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.109160 containerd[1601]: 2026-01-28 02:03:43.879 [INFO][3819] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0", GenerateName:"calico-apiserver-6656f8f9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"67521aee-68dc-4703-af3e-6a8c6df60cd8", ResourceVersion:"1356", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6656f8f9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2", Pod:"calico-apiserver-6656f8f9d9-spnd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic65cbbcb9c7", MAC:"3a:55:fd:27:52:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:44.109160 containerd[1601]: 2026-01-28 02:03:44.032 [INFO][3819] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-spnd9" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--spnd9-eth0" Jan 28 02:03:44.189517 containerd[1601]: time="2026-01-28T02:03:44.173430386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krgpk,Uid:15b582de-4a9d-49bf-b8af-da9b7c0dc36f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae385d4407c121ac0ea015ec9d3b29effa3d4db203344b28f404ec132387c56d\"" Jan 28 02:03:44.189517 containerd[1601]: time="2026-01-28T02:03:44.180224119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:03:44.219000 audit: BPF prog-id=115 op=LOAD Jan 28 02:03:44.231000 audit: BPF prog-id=116 op=LOAD Jan 28 02:03:44.231000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.231000 audit: BPF prog-id=116 op=UNLOAD Jan 28 02:03:44.231000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.231000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.231000 audit: BPF prog-id=117 op=LOAD Jan 28 02:03:44.231000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.231000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.232000 audit: BPF prog-id=118 op=LOAD Jan 28 02:03:44.232000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.232000 audit: BPF prog-id=118 op=UNLOAD Jan 28 02:03:44.232000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:03:44.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.232000 audit: BPF prog-id=117 op=UNLOAD Jan 28 02:03:44.232000 audit[4194]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.232000 audit: BPF prog-id=119 op=LOAD Jan 28 02:03:44.232000 audit[4194]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4168 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366262613038653763383862356535316135303238363465303563 Jan 28 02:03:44.251996 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:44.302129 containerd[1601]: time="2026-01-28T02:03:44.302007680Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c7z7j,Uid:36471742-e8b3-41d5-8572-474eef077778,Namespace:default,Attempt:0,} returns sandbox id \"801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d\"" Jan 28 02:03:44.317000 audit: BPF prog-id=120 op=LOAD Jan 28 02:03:44.317000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3f70810 a2=98 a3=1fffffffffffffff items=0 ppid=3971 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 02:03:44.317000 audit: BPF prog-id=120 op=UNLOAD Jan 28 02:03:44.317000 audit[4287]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffec3f707e0 a3=0 items=0 ppid=3971 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 02:03:44.317000 audit: BPF prog-id=121 op=LOAD Jan 28 02:03:44.317000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3f706f0 a2=94 a3=3 items=0 ppid=3971 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
02:03:44.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 02:03:44.317000 audit: BPF prog-id=121 op=UNLOAD Jan 28 02:03:44.317000 audit[4287]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec3f706f0 a2=94 a3=3 items=0 ppid=3971 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 02:03:44.317000 audit: BPF prog-id=122 op=LOAD Jan 28 02:03:44.317000 audit[4287]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec3f70730 a2=94 a3=7ffec3f70910 items=0 ppid=3971 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 02:03:44.317000 audit: BPF prog-id=122 op=UNLOAD Jan 28 02:03:44.317000 audit[4287]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffec3f70730 a2=94 a3=7ffec3f70910 items=0 ppid=3971 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 02:03:44.335000 audit: BPF prog-id=123 op=LOAD Jan 28 02:03:44.335000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5db8d970 a2=98 a3=3 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:44.335000 audit: BPF prog-id=123 op=UNLOAD Jan 28 02:03:44.335000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff5db8d940 a3=0 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:44.344000 audit: BPF prog-id=124 op=LOAD Jan 28 02:03:44.344000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5db8d760 a2=94 a3=54428f items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:44.344000 audit: BPF prog-id=124 op=UNLOAD Jan 28 02:03:44.344000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5db8d760 
a2=94 a3=54428f items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:44.344000 audit: BPF prog-id=125 op=LOAD Jan 28 02:03:44.344000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5db8d790 a2=94 a3=2 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:44.344000 audit: BPF prog-id=125 op=UNLOAD Jan 28 02:03:44.344000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5db8d790 a2=0 a3=2 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:44.517089 containerd[1601]: time="2026-01-28T02:03:44.494055555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:03:44.538822 containerd[1601]: time="2026-01-28T02:03:44.523461398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 02:03:44.538822 containerd[1601]: time="2026-01-28T02:03:44.523608681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:03:44.539114 kubelet[1960]: E0128 
02:03:44.537078 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:03:44.539114 kubelet[1960]: E0128 02:03:44.537136 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:03:44.534221 systemd[1]: Started cri-containerd-f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de.scope - libcontainer container f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de. Jan 28 02:03:44.539468 kubelet[1960]: E0128 02:03:44.537510 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxqrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 02:03:44.551090 containerd[1601]: time="2026-01-28T02:03:44.551043037Z" level=info msg="connecting to shim 184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2" address="unix:///run/containerd/s/77ed08fed783728553b49fa5537c67e9b81db895b25559a67fee728d2ed4f349" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:44.564831 containerd[1601]: time="2026-01-28T02:03:44.563730453Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 28 02:03:44.713349 systemd-networkd[1507]: cali9b1afae00c3: Link UP Jan 28 02:03:44.714592 systemd-networkd[1507]: cali9b1afae00c3: Gained carrier Jan 28 02:03:44.790000 audit: BPF prog-id=126 op=LOAD Jan 28 02:03:44.792000 audit: BPF prog-id=127 op=LOAD Jan 28 02:03:44.792000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.792000 audit: BPF prog-id=127 op=UNLOAD Jan 28 02:03:44.792000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.792000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.813000 audit: BPF prog-id=128 op=LOAD Jan 28 02:03:44.813000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.813000 audit: BPF prog-id=129 op=LOAD Jan 28 02:03:44.813000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.813000 audit: BPF prog-id=129 op=UNLOAD Jan 28 02:03:44.813000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:03:44.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.814000 audit: BPF prog-id=128 op=UNLOAD Jan 28 02:03:44.814000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.814000 audit: BPF prog-id=130 op=LOAD Jan 28 02:03:44.814000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4238 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:44.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632346239663838376530626362653862323762343461353262643731 Jan 28 02:03:44.879730 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:44.909066 kubelet[1960]: E0128 02:03:44.908118 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 28 02:03:44.941590 systemd[1]: Started cri-containerd-184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2.scope - libcontainer container 184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2. Jan 28 02:03:44.948096 containerd[1601]: time="2026-01-28T02:03:44.945253313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwxm9,Uid:3eaa438b-c98e-4a63-b138-6192c658da00,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f\"" Jan 28 02:03:44.953580 kubelet[1960]: E0128 02:03:44.952997 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:39.232 [INFO][3903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:39.574 [INFO][3903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0 calico-kube-controllers-78fc6b544- calico-system 9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc 1339 0 2026-01-28 02:01:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78fc6b544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.114 calico-kube-controllers-78fc6b544-rfcfq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9b1afae00c3 [] [] }} ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-" 
Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:39.574 [INFO][3903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:41.777 [INFO][3964] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" HandleID="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Workload="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:41.777 [INFO][3964] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" HandleID="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Workload="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047a200), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.114", "pod":"calico-kube-controllers-78fc6b544-rfcfq", "timestamp":"2026-01-28 02:03:41.777272277 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:41.777 [INFO][3964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:43.265 [INFO][3964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:43.267 [INFO][3964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:43.466 [INFO][3964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:43.883 [INFO][3964] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.091 [INFO][3964] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.175 [INFO][3964] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.214 [INFO][3964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.214 [INFO][3964] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.244 [INFO][3964] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88 Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.445 [INFO][3964] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.613 [INFO][3964] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.134/26] block=192.168.101.128/26 
handle="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.613 [INFO][3964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.134/26] handle="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" host="10.0.0.114" Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.613 [INFO][3964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:03:44.954292 containerd[1601]: 2026-01-28 02:03:44.613 [INFO][3964] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.134/26] IPv6=[] ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" HandleID="k8s-pod-network.94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Workload="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:44.963761 containerd[1601]: 2026-01-28 02:03:44.676 [INFO][3903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0", GenerateName:"calico-kube-controllers-78fc6b544-", Namespace:"calico-system", SelfLink:"", UID:"9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fc6b544", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"calico-kube-controllers-78fc6b544-rfcfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b1afae00c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:44.963761 containerd[1601]: 2026-01-28 02:03:44.677 [INFO][3903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.134/32] ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:44.963761 containerd[1601]: 2026-01-28 02:03:44.677 [INFO][3903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b1afae00c3 ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:44.963761 containerd[1601]: 2026-01-28 02:03:44.716 [INFO][3903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:44.963761 containerd[1601]: 
2026-01-28 02:03:44.784 [INFO][3903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0", GenerateName:"calico-kube-controllers-78fc6b544-", Namespace:"calico-system", SelfLink:"", UID:"9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78fc6b544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88", Pod:"calico-kube-controllers-78fc6b544-rfcfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b1afae00c3", MAC:"5e:82:6f:a0:1d:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:44.963761 containerd[1601]: 
2026-01-28 02:03:44.912 [INFO][3903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" Namespace="calico-system" Pod="calico-kube-controllers-78fc6b544-rfcfq" WorkloadEndpoint="10.0.0.114-k8s-calico--kube--controllers--78fc6b544--rfcfq-eth0" Jan 28 02:03:45.096380 containerd[1601]: time="2026-01-28T02:03:45.094745258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t45sz,Uid:1f7a7a51-f1ca-4889-bd7c-61ed908ad5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de\"" Jan 28 02:03:45.097760 kubelet[1960]: E0128 02:03:45.097662 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:03:45.104000 audit: BPF prog-id=131 op=LOAD Jan 28 02:03:45.105000 audit: BPF prog-id=132 op=LOAD Jan 28 02:03:45.105000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.105000 audit: BPF prog-id=132 op=UNLOAD Jan 28 02:03:45.105000 audit[4333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 02:03:45.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.106000 audit: BPF prog-id=133 op=LOAD Jan 28 02:03:45.106000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.106000 audit: BPF prog-id=134 op=LOAD Jan 28 02:03:45.106000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.106000 audit: BPF prog-id=134 op=UNLOAD Jan 28 02:03:45.106000 audit[4333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.111000 audit: BPF prog-id=133 op=UNLOAD Jan 28 02:03:45.111000 audit[4333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.111000 audit: BPF prog-id=135 op=LOAD Jan 28 02:03:45.111000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4310 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138346563326335366466386331393733633066316539336233353166 Jan 28 02:03:45.119469 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:45.147000 audit: BPF prog-id=136 op=LOAD Jan 28 02:03:45.147000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7fff5db8d650 a2=94 a3=1 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.147000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.147000 audit: BPF prog-id=136 op=UNLOAD Jan 28 02:03:45.147000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff5db8d650 a2=94 a3=1 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.147000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.152095 containerd[1601]: time="2026-01-28T02:03:45.147298258Z" level=info msg="connecting to shim 94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88" address="unix:///run/containerd/s/7026e338a69b06749f589ce1a02e3d2709d1df9fda72604edc190ea54e199a87" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:45.174000 audit: BPF prog-id=137 op=LOAD Jan 28 02:03:45.174000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5db8d640 a2=94 a3=4 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.174000 audit: BPF prog-id=137 op=UNLOAD Jan 28 02:03:45.174000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff5db8d640 a2=0 a3=4 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:03:45.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.175000 audit: BPF prog-id=138 op=LOAD Jan 28 02:03:45.175000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff5db8d4a0 a2=94 a3=5 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.175000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.175000 audit: BPF prog-id=138 op=UNLOAD Jan 28 02:03:45.175000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff5db8d4a0 a2=0 a3=5 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.175000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.175000 audit: BPF prog-id=139 op=LOAD Jan 28 02:03:45.175000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5db8d6c0 a2=94 a3=6 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.175000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.175000 audit: BPF prog-id=139 op=UNLOAD Jan 28 02:03:45.175000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff5db8d6c0 a2=0 a3=6 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.175000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.176000 audit: BPF prog-id=140 op=LOAD Jan 28 02:03:45.176000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff5db8ce70 a2=94 a3=88 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.176000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.176000 audit: BPF prog-id=141 op=LOAD Jan 28 02:03:45.176000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff5db8ccf0 a2=94 a3=2 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.176000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.176000 audit: BPF prog-id=141 op=UNLOAD Jan 28 02:03:45.176000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff5db8cd20 a2=0 a3=7fff5db8ce20 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.176000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.177000 audit: BPF prog-id=140 op=UNLOAD Jan 28 02:03:45.177000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2ca9bd10 a2=0 a3=9e83ea891e030693 items=0 ppid=3971 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.177000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 02:03:45.391000 audit: BPF prog-id=142 op=LOAD Jan 28 02:03:45.391000 audit[4413]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9b41a220 a2=98 a3=1999999999999999 items=0 ppid=3971 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.391000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 02:03:45.391000 audit: BPF prog-id=142 op=UNLOAD Jan 28 02:03:45.391000 audit[4413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe9b41a1f0 a3=0 items=0 ppid=3971 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.391000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 02:03:45.391000 audit: BPF prog-id=143 op=LOAD Jan 28 02:03:45.391000 audit[4413]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9b41a100 a2=94 a3=ffff items=0 ppid=3971 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.391000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 02:03:45.391000 audit: BPF prog-id=143 op=UNLOAD Jan 28 02:03:45.391000 audit[4413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe9b41a100 a2=94 a3=ffff items=0 ppid=3971 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.391000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 02:03:45.391000 audit: BPF prog-id=144 op=LOAD Jan 28 02:03:45.391000 audit[4413]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9b41a140 a2=94 a3=7ffe9b41a320 items=0 ppid=3971 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.391000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 02:03:45.392000 audit: BPF prog-id=144 op=UNLOAD Jan 28 02:03:45.392000 audit[4413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe9b41a140 a2=94 a3=7ffe9b41a320 items=0 ppid=3971 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.392000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 02:03:45.402551 systemd[1]: Started cri-containerd-94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88.scope - libcontainer container 94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88. Jan 28 02:03:45.494128 systemd-networkd[1507]: cali061ed579d41: Link UP Jan 28 02:03:45.494753 systemd-networkd[1507]: cali061ed579d41: Gained carrier Jan 28 02:03:45.532000 audit: BPF prog-id=145 op=LOAD Jan 28 02:03:45.535000 audit: BPF prog-id=146 op=LOAD Jan 28 02:03:45.535000 audit[4400]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.535000 audit: BPF prog-id=146 op=UNLOAD Jan 28 02:03:45.535000 audit[4400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.535000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.569000 audit: BPF prog-id=147 op=LOAD Jan 28 02:03:45.576277 containerd[1601]: time="2026-01-28T02:03:45.576158230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-spnd9,Uid:67521aee-68dc-4703-af3e-6a8c6df60cd8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"184ec2c56df8c1973c0f1e93b351f8a7637086106952b6b90e996dbb59cfd5e2\"" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:42.984 [INFO][4104] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.238 [INFO][4104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0 calico-apiserver-6656f8f9d9- calico-apiserver 5a2efbc6-3a74-40a5-b192-41e159a7237c 1355 0 2026-01-28 01:58:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6656f8f9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.114 calico-apiserver-6656f8f9d9-6mpkc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali061ed579d41 [] [] }} ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.238 [INFO][4104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.893 [INFO][4319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" HandleID="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Workload="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.895 [INFO][4319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" HandleID="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Workload="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a72e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.114", "pod":"calico-apiserver-6656f8f9d9-6mpkc", "timestamp":"2026-01-28 02:03:44.893416798 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.895 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.895 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:44.895 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.039 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.100 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.174 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.188 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.207 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.207 [INFO][4319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.240 [INFO][4319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412 Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.400 [INFO][4319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.468 [INFO][4319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.135/26] block=192.168.101.128/26 
handle="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.469 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.135/26] handle="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" host="10.0.0.114" Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.469 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:03:45.584331 containerd[1601]: 2026-01-28 02:03:45.469 [INFO][4319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.135/26] IPv6=[] ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" HandleID="k8s-pod-network.b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Workload="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.585415 containerd[1601]: 2026-01-28 02:03:45.484 [INFO][4104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0", GenerateName:"calico-apiserver-6656f8f9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a2efbc6-3a74-40a5-b192-41e159a7237c", ResourceVersion:"1355", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6656f8f9d9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"calico-apiserver-6656f8f9d9-6mpkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali061ed579d41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:45.585415 containerd[1601]: 2026-01-28 02:03:45.488 [INFO][4104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.135/32] ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.585415 containerd[1601]: 2026-01-28 02:03:45.489 [INFO][4104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali061ed579d41 ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.585415 containerd[1601]: 2026-01-28 02:03:45.503 [INFO][4104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.585415 containerd[1601]: 2026-01-28 02:03:45.511 [INFO][4104] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0", GenerateName:"calico-apiserver-6656f8f9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a2efbc6-3a74-40a5-b192-41e159a7237c", ResourceVersion:"1355", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6656f8f9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412", Pod:"calico-apiserver-6656f8f9d9-6mpkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali061ed579d41", MAC:"f2:2b:3e:f2:37:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:45.585415 containerd[1601]: 2026-01-28 02:03:45.579 [INFO][4104] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" Namespace="calico-apiserver" Pod="calico-apiserver-6656f8f9d9-6mpkc" WorkloadEndpoint="10.0.0.114-k8s-calico--apiserver--6656f8f9d9--6mpkc-eth0" Jan 28 02:03:45.569000 audit[4400]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.588000 audit: BPF prog-id=148 op=LOAD Jan 28 02:03:45.588000 audit[4400]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.588000 audit: BPF prog-id=148 op=UNLOAD Jan 28 02:03:45.588000 audit[4400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.588000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.588000 audit: BPF prog-id=147 op=UNLOAD Jan 28 02:03:45.588000 audit[4400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.588000 audit: BPF prog-id=149 op=LOAD Jan 28 02:03:45.588000 audit[4400]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4388 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:45.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663663356431646265653139363466343862373539613734316131 Jan 28 02:03:45.599555 systemd-networkd[1507]: calic65cbbcb9c7: Gained IPv6LL Jan 28 02:03:45.683530 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:45.836569 systemd-networkd[1507]: cali9b1afae00c3: Gained IPv6LL Jan 28 02:03:45.873036 containerd[1601]: 
time="2026-01-28T02:03:45.871801348Z" level=info msg="connecting to shim b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412" address="unix:///run/containerd/s/741cda463bd214b9a29c40ec4b925cccb3fc9777c07ead6b297155f1dfb150bd" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:45.880628 systemd-networkd[1507]: vxlan.calico: Link UP Jan 28 02:03:45.880639 systemd-networkd[1507]: vxlan.calico: Gained carrier Jan 28 02:03:45.909261 kubelet[1960]: E0128 02:03:45.908379 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:46.048823 systemd[1]: Started cri-containerd-b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412.scope - libcontainer container b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412. Jan 28 02:03:46.082000 audit: BPF prog-id=150 op=LOAD Jan 28 02:03:46.082000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdc9cf2c0 a2=98 a3=0 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=150 op=UNLOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcdc9cf290 a3=0 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=151 op=LOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdc9cf0d0 a2=94 a3=54428f items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=151 op=UNLOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcdc9cf0d0 a2=94 a3=54428f items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=152 op=LOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdc9cf100 a2=94 a3=2 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=152 op=UNLOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcdc9cf100 a2=0 a3=2 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=153 op=LOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcdc9ceeb0 a2=94 a3=4 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=153 op=UNLOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcdc9ceeb0 a2=94 a3=4 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=154 op=LOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcdc9cefb0 a2=94 a3=7ffcdc9cf130 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.083000 audit: BPF prog-id=154 op=UNLOAD Jan 28 02:03:46.083000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcdc9cefb0 a2=0 a3=7ffcdc9cf130 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.085000 audit: BPF prog-id=155 op=LOAD Jan 28 02:03:46.085000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcdc9ce6e0 a2=94 a3=2 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.085000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.085000 audit: BPF prog-id=155 op=UNLOAD Jan 28 02:03:46.085000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcdc9ce6e0 a2=0 a3=2 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.085000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.085000 audit: BPF prog-id=156 op=LOAD Jan 28 02:03:46.085000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcdc9ce7e0 a2=94 a3=30 items=0 ppid=3971 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.085000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 02:03:46.125000 audit: BPF prog-id=157 op=LOAD Jan 28 02:03:46.125000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb0a47900 a2=98 a3=0 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.125000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.125000 audit: BPF prog-id=157 op=UNLOAD Jan 28 02:03:46.125000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcb0a478d0 a3=0 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.126000 audit: BPF prog-id=158 op=LOAD Jan 28 02:03:46.126000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb0a476f0 a2=94 a3=54428f items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.126000 audit: BPF prog-id=158 op=UNLOAD Jan 28 02:03:46.126000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb0a476f0 a2=94 a3=54428f items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.126000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.126000 audit: BPF prog-id=159 op=LOAD Jan 28 02:03:46.126000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb0a47720 a2=94 a3=2 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.127000 audit: BPF prog-id=159 op=UNLOAD Jan 28 02:03:46.127000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb0a47720 a2=0 a3=2 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.219000 audit: BPF prog-id=160 op=LOAD Jan 28 02:03:46.222000 audit: BPF prog-id=161 op=LOAD Jan 28 02:03:46.222000 audit[4501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.222000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.222000 audit: BPF prog-id=161 op=UNLOAD Jan 28 02:03:46.222000 audit[4501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.228000 audit: BPF prog-id=162 op=LOAD Jan 28 02:03:46.228000 audit[4501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.234000 audit: BPF prog-id=163 op=LOAD Jan 28 02:03:46.234000 audit[4501]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:03:46.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.238000 audit: BPF prog-id=163 op=UNLOAD Jan 28 02:03:46.238000 audit[4501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.239000 audit: BPF prog-id=162 op=UNLOAD Jan 28 02:03:46.239000 audit[4501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.240000 audit: BPF prog-id=164 op=LOAD Jan 28 02:03:46.240000 audit[4501]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4483 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237633362656664373637623566313637383966626465636533303165 Jan 28 02:03:46.249776 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:46.504184 containerd[1601]: time="2026-01-28T02:03:46.497486143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78fc6b544-rfcfq,Uid:9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"94f6c5d1dbee1964f48b759a741a154893da88e14053b024d862d8be59befd88\"" Jan 28 02:03:46.816000 audit: BPF prog-id=165 op=LOAD Jan 28 02:03:46.816000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb0a475e0 a2=94 a3=1 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.816000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.816000 audit: BPF prog-id=165 op=UNLOAD Jan 28 02:03:46.816000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb0a475e0 a2=94 a3=1 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.816000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.817977 systemd-networkd[1507]: cali061ed579d41: Gained IPv6LL Jan 28 02:03:46.868000 audit: BPF prog-id=166 op=LOAD Jan 28 02:03:46.868000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb0a475d0 a2=94 a3=4 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.868000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.869000 audit: BPF prog-id=166 op=UNLOAD Jan 28 02:03:46.869000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcb0a475d0 a2=0 a3=4 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.869000 audit: BPF prog-id=167 op=LOAD Jan 28 02:03:46.869000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb0a47430 a2=94 a3=5 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.869000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.869000 audit: BPF prog-id=167 op=UNLOAD Jan 28 02:03:46.869000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcb0a47430 a2=0 a3=5 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.869000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.879000 audit: BPF prog-id=168 op=LOAD Jan 28 02:03:46.879000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb0a47650 a2=94 a3=6 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.879000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.879000 audit: BPF prog-id=168 op=UNLOAD Jan 28 02:03:46.879000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcb0a47650 a2=0 a3=6 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.879000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.879000 audit: BPF prog-id=169 op=LOAD Jan 28 02:03:46.879000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb0a46e00 a2=94 a3=88 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.879000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.880000 audit: BPF prog-id=170 op=LOAD Jan 28 02:03:46.880000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcb0a46c80 a2=94 a3=2 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.880000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.880000 audit: BPF prog-id=170 op=UNLOAD Jan 28 02:03:46.880000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcb0a46cb0 a2=0 a3=7ffcb0a46db0 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.880000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.881000 audit: BPF prog-id=169 op=UNLOAD Jan 28 02:03:46.881000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3d373d10 a2=0 a3=afa9f565ab76f318 items=0 ppid=3971 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 02:03:46.913819 containerd[1601]: time="2026-01-28T02:03:46.882541087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6656f8f9d9-6mpkc,Uid:5a2efbc6-3a74-40a5-b192-41e159a7237c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7c3befd767b5f16789fbdece301e6d027e3c48f7470151f82757396dfc0d412\"" Jan 28 02:03:46.917137 kubelet[1960]: E0128 02:03:46.908543 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:46.989000 audit: BPF prog-id=156 op=UNLOAD Jan 28 02:03:46.989000 audit[3971]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000840380 a2=0 a3=0 items=0 ppid=3942 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:46.989000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 02:03:47.180667 systemd-networkd[1507]: vxlan.calico: Gained IPv6LL Jan 28 02:03:47.667000 audit[4600]: NETFILTER_CFG table=mangle:65 family=2 entries=16 
op=nft_register_chain pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:47.667000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe141128c0 a2=0 a3=7ffe141128ac items=0 ppid=3971 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:47.667000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:47.775000 audit[4603]: NETFILTER_CFG table=nat:66 family=2 entries=15 op=nft_register_chain pid=4603 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:47.775000 audit[4603]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffce93b96e0 a2=0 a3=7ffce93b96cc items=0 ppid=3971 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:47.775000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:47.804000 audit[4599]: NETFILTER_CFG table=raw:67 family=2 entries=21 op=nft_register_chain pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:47.804000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe0ad5d1d0 a2=0 a3=7ffe0ad5d1bc items=0 ppid=3971 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:47.804000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:47.865047 systemd-networkd[1507]: cali2da071532ca: Link UP Jan 28 02:03:47.874167 systemd-networkd[1507]: cali2da071532ca: Gained carrier Jan 28 02:03:47.912445 kubelet[1960]: E0128 02:03:47.912368 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:48.035211 kernel: kauditd_printk_skb: 339 callbacks suppressed Jan 28 02:03:48.035374 kernel: audit: type=1325 audit(1769565827.831:520): table=filter:68 family=2 entries=264 op=nft_register_chain pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:47.831000 audit[4602]: NETFILTER_CFG table=filter:68 family=2 entries=264 op=nft_register_chain pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:47.831000 audit[4602]: SYSCALL arch=c000003e syscall=46 success=yes exit=153924 a0=3 a1=7ffcb3289b60 a2=0 a3=7ffcb3289b4c items=0 ppid=3971 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:47.831000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.038 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0 goldmane-666569f655- calico-system f4b6fba0-f381-4858-a71c-ba2619256e7e 1359 0 2026-01-28 01:58:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 10.0.0.114 goldmane-666569f655-5zdgq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2da071532ca [] [] }} ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.038 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.609 [INFO][4537] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" HandleID="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Workload="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.610 [INFO][4537] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" HandleID="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Workload="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de160), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.114", "pod":"goldmane-666569f655-5zdgq", "timestamp":"2026-01-28 02:03:46.609961362 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.610 
[INFO][4537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.611 [INFO][4537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.611 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:46.930 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.109 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.287 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.329 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.386 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.390 [INFO][4537] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.415 [INFO][4537] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409 Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.472 [INFO][4537] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" host="10.0.0.114" 
Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.527 [INFO][4537] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.136/26] block=192.168.101.128/26 handle="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.527 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.136/26] handle="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" host="10.0.0.114" Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.530 [INFO][4537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:03:48.136662 containerd[1601]: 2026-01-28 02:03:47.546 [INFO][4537] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.136/26] IPv6=[] ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" HandleID="k8s-pod-network.5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Workload="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.147685 containerd[1601]: 2026-01-28 02:03:47.592 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f4b6fba0-f381-4858-a71c-ba2619256e7e", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"goldmane-666569f655-5zdgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2da071532ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:48.147685 containerd[1601]: 2026-01-28 02:03:47.597 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.136/32] ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.147685 containerd[1601]: 2026-01-28 02:03:47.597 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2da071532ca ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.147685 containerd[1601]: 2026-01-28 02:03:47.869 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.147685 containerd[1601]: 2026-01-28 02:03:47.873 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f4b6fba0-f381-4858-a71c-ba2619256e7e", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409", Pod:"goldmane-666569f655-5zdgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2da071532ca", MAC:"96:15:04:66:d4:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:48.147685 containerd[1601]: 2026-01-28 02:03:48.047 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" Namespace="calico-system" 
Pod="goldmane-666569f655-5zdgq" WorkloadEndpoint="10.0.0.114-k8s-goldmane--666569f655--5zdgq-eth0" Jan 28 02:03:48.155552 kernel: audit: type=1300 audit(1769565827.831:520): arch=c000003e syscall=46 success=yes exit=153924 a0=3 a1=7ffcb3289b60 a2=0 a3=7ffcb3289b4c items=0 ppid=3971 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:48.155657 kernel: audit: type=1327 audit(1769565827.831:520): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:48.227000 audit[4625]: NETFILTER_CFG table=filter:69 family=2 entries=68 op=nft_register_chain pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:48.227000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=32308 a0=3 a1=7ffc23afcf20 a2=0 a3=7ffc23afcf0c items=0 ppid=3971 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:48.296949 kernel: audit: type=1325 audit(1769565828.227:521): table=filter:69 family=2 entries=68 op=nft_register_chain pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:48.361466 kernel: audit: type=1300 audit(1769565828.227:521): arch=c000003e syscall=46 success=yes exit=32308 a0=3 a1=7ffc23afcf20 a2=0 a3=7ffc23afcf0c items=0 ppid=3971 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:48.361522 kernel: audit: type=1327 audit(1769565828.227:521): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:48.227000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:48.497288 systemd-networkd[1507]: cali3acfc123f7e: Link UP Jan 28 02:03:48.498665 systemd-networkd[1507]: cali3acfc123f7e: Gained carrier Jan 28 02:03:48.558454 containerd[1601]: time="2026-01-28T02:03:48.558345410Z" level=info msg="connecting to shim 5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409" address="unix:///run/containerd/s/eb56d6f80baee95680f09f5f49de6d1c0a195ec6af35ef0c50d662b174609b23" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:48.734000 audit[4664]: NETFILTER_CFG table=filter:70 family=2 entries=85 op=nft_register_chain pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:48.734000 audit[4664]: SYSCALL arch=c000003e syscall=46 success=yes exit=42388 a0=3 a1=7ffc5d03cb10 a2=0 a3=7ffc5d03cafc items=0 ppid=3971 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:48.835971 kernel: audit: type=1325 audit(1769565828.734:522): table=filter:70 family=2 entries=85 op=nft_register_chain pid=4664 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:03:48.836076 kernel: audit: type=1300 audit(1769565828.734:522): arch=c000003e syscall=46 success=yes exit=42388 a0=3 a1=7ffc5d03cb10 a2=0 a3=7ffc5d03cafc items=0 ppid=3971 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:48.734000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:46.071 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0 whisker-54df6f8c4d- calico-system 9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f 1416 0 2026-01-28 02:02:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54df6f8c4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 10.0.0.114 whisker-54df6f8c4d-bq29n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3acfc123f7e [] [] }} ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:46.102 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:46.615 [INFO][4555] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" HandleID="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Workload="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:46.618 [INFO][4555] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" 
HandleID="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Workload="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc70), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.114", "pod":"whisker-54df6f8c4d-bq29n", "timestamp":"2026-01-28 02:03:46.615213869 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:46.621 [INFO][4555] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:47.532 [INFO][4555] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:47.539 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:47.732 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:47.891 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:47.988 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.020 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.050 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 
02:03:48.050 [INFO][4555] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.082 [INFO][4555] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03 Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.139 [INFO][4555] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.321 [INFO][4555] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.137/26] block=192.168.101.128/26 handle="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.383 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.137/26] handle="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" host="10.0.0.114" Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.383 [INFO][4555] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:03:48.847254 containerd[1601]: 2026-01-28 02:03:48.383 [INFO][4555] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.137/26] IPv6=[] ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" HandleID="k8s-pod-network.3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Workload="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.848308 containerd[1601]: 2026-01-28 02:03:48.424 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0", GenerateName:"whisker-54df6f8c4d-", Namespace:"calico-system", SelfLink:"", UID:"9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f", ResourceVersion:"1416", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54df6f8c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"whisker-54df6f8c4d-bq29n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3acfc123f7e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:48.848308 containerd[1601]: 2026-01-28 02:03:48.425 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.137/32] ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.848308 containerd[1601]: 2026-01-28 02:03:48.425 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3acfc123f7e ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.848308 containerd[1601]: 2026-01-28 02:03:48.496 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.848308 containerd[1601]: 2026-01-28 02:03:48.501 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0", GenerateName:"whisker-54df6f8c4d-", Namespace:"calico-system", SelfLink:"", UID:"9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f", ResourceVersion:"1416", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 2, 49, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54df6f8c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03", Pod:"whisker-54df6f8c4d-bq29n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3acfc123f7e", MAC:"be:e4:2b:de:e8:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:03:48.848308 containerd[1601]: 2026-01-28 02:03:48.783 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" Namespace="calico-system" Pod="whisker-54df6f8c4d-bq29n" WorkloadEndpoint="10.0.0.114-k8s-whisker--54df6f8c4d--bq29n-eth0" Jan 28 02:03:48.892692 kernel: audit: type=1327 audit(1769565828.734:522): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:03:48.910187 systemd[1]: Started cri-containerd-5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409.scope - libcontainer container 5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409. 
Jan 28 02:03:48.921011 kubelet[1960]: E0128 02:03:48.920970 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:49.039000 audit: BPF prog-id=171 op=LOAD Jan 28 02:03:49.045347 kernel: audit: type=1334 audit(1769565829.039:523): prog-id=171 op=LOAD Jan 28 02:03:49.046951 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:49.040000 audit: BPF prog-id=172 op=LOAD Jan 28 02:03:49.040000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000158238 a2=98 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.040000 audit: BPF prog-id=172 op=UNLOAD Jan 28 02:03:49.040000 audit[4651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.040000 audit: BPF prog-id=173 op=LOAD Jan 28 02:03:49.040000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000158488 
a2=98 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.040000 audit: BPF prog-id=174 op=LOAD Jan 28 02:03:49.040000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000158218 a2=98 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.043000 audit: BPF prog-id=174 op=UNLOAD Jan 28 02:03:49.043000 audit[4651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.043000 audit: BPF prog-id=173 op=UNLOAD Jan 28 02:03:49.043000 audit[4651]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.043000 audit: BPF prog-id=175 op=LOAD Jan 28 02:03:49.043000 audit[4651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001586e8 a2=98 a3=0 items=0 ppid=4641 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396662613166303731663066306331383863386163396437376364 Jan 28 02:03:49.077932 containerd[1601]: time="2026-01-28T02:03:49.077381390Z" level=info msg="connecting to shim 3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03" address="unix:///run/containerd/s/3fb8b25f0c3a63654f8314ccb7614350d1d1791964fc81f2dc467d23f8bae12e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:03:49.239971 containerd[1601]: time="2026-01-28T02:03:49.238983498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5zdgq,Uid:f4b6fba0-f381-4858-a71c-ba2619256e7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c9fba1f071f0f0c188c8ac9d77cd4902c9caa965575d1dcde428b2774996409\"" Jan 28 02:03:49.269315 systemd[1]: Started 
cri-containerd-3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03.scope - libcontainer container 3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03. Jan 28 02:03:49.335000 audit: BPF prog-id=176 op=LOAD Jan 28 02:03:49.337000 audit: BPF prog-id=177 op=LOAD Jan 28 02:03:49.337000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.337000 audit: BPF prog-id=177 op=UNLOAD Jan 28 02:03:49.337000 audit[4700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.344000 audit: BPF prog-id=178 op=LOAD Jan 28 02:03:49.344000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.344000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.344000 audit: BPF prog-id=179 op=LOAD Jan 28 02:03:49.344000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.345000 audit: BPF prog-id=179 op=UNLOAD Jan 28 02:03:49.345000 audit[4700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.345000 audit: BPF prog-id=178 op=UNLOAD Jan 28 02:03:49.345000 audit[4700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:03:49.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.345000 audit: BPF prog-id=180 op=LOAD Jan 28 02:03:49.345000 audit[4700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:49.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362323139663165376136616234313763336661613534373831383866 Jan 28 02:03:49.351409 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:03:49.532117 containerd[1601]: time="2026-01-28T02:03:49.531819984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54df6f8c4d-bq29n,Uid:9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b219f1e7a6ab417c3faa5478188f1d055e1429435d4a6fc450b0895cbdd9e03\"" Jan 28 02:03:49.869393 systemd-networkd[1507]: cali2da071532ca: Gained IPv6LL Jan 28 02:03:49.943360 kubelet[1960]: E0128 02:03:49.942769 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:50.129700 systemd-networkd[1507]: cali3acfc123f7e: Gained IPv6LL Jan 28 02:03:50.945471 kubelet[1960]: E0128 02:03:50.945039 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 
02:03:51.955610 kubelet[1960]: E0128 02:03:51.955565 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:52.975302 kubelet[1960]: E0128 02:03:52.975252 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:53.165191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805442178.mount: Deactivated successfully. Jan 28 02:03:53.977749 kubelet[1960]: E0128 02:03:53.977432 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:54.988446 kubelet[1960]: E0128 02:03:54.984346 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:55.987548 kubelet[1960]: E0128 02:03:55.985044 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:56.994172 kubelet[1960]: E0128 02:03:56.991375 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:57.994060 kubelet[1960]: E0128 02:03:57.993996 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:58.893259 containerd[1601]: time="2026-01-28T02:03:58.891992050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:03:58.911978 containerd[1601]: time="2026-01-28T02:03:58.911796279Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63834580" Jan 28 02:03:58.931941 containerd[1601]: time="2026-01-28T02:03:58.931113602Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:03:58.961450 containerd[1601]: time="2026-01-28T02:03:58.961396717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:03:58.968963 containerd[1601]: time="2026-01-28T02:03:58.968823114Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 14.399149004s" Jan 28 02:03:58.969162 containerd[1601]: time="2026-01-28T02:03:58.969137052Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 28 02:03:58.978761 containerd[1601]: time="2026-01-28T02:03:58.978728168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:03:58.992493 containerd[1601]: time="2026-01-28T02:03:58.992447967Z" level=info msg="CreateContainer within sandbox \"801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 28 02:03:59.000657 kubelet[1960]: E0128 02:03:58.995344 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:03:59.077214 containerd[1601]: time="2026-01-28T02:03:59.077107306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:03:59.083432 containerd[1601]: time="2026-01-28T02:03:59.083179029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:03:59.083432 containerd[1601]: time="2026-01-28T02:03:59.083376611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 02:03:59.083651 kubelet[1960]: E0128 02:03:59.083609 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:03:59.084151 kubelet[1960]: E0128 02:03:59.083670 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:03:59.084656 kubelet[1960]: E0128 02:03:59.084600 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxqrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:03:59.085344 containerd[1601]: time="2026-01-28T02:03:59.085106240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 02:03:59.086415 containerd[1601]: time="2026-01-28T02:03:59.086156770Z" level=info msg="Container fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:03:59.087525 kubelet[1960]: E0128 02:03:59.086992 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:03:59.117097 containerd[1601]: time="2026-01-28T02:03:59.116915530Z" level=info msg="CreateContainer within sandbox \"801affd94abdc56194dc42ddf549a63fa34eec3a0736cd1321f6b10a91386f9d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd\"" Jan 28 02:03:59.123521 containerd[1601]: time="2026-01-28T02:03:59.121976052Z" level=info msg="StartContainer for \"fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd\"" Jan 28 02:03:59.123521 containerd[1601]: time="2026-01-28T02:03:59.123246493Z" level=info msg="connecting to shim fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd" 
address="unix:///run/containerd/s/a92fc5cb57384926ed838b215460989f0d9a5c8a989d022b96af53aaa9327c6d" protocol=ttrpc version=3 Jan 28 02:03:59.179318 systemd[1]: Started cri-containerd-fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd.scope - libcontainer container fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd. Jan 28 02:03:59.212000 audit: BPF prog-id=181 op=LOAD Jan 28 02:03:59.220987 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 28 02:03:59.221098 kernel: audit: type=1334 audit(1769565839.212:539): prog-id=181 op=LOAD Jan 28 02:03:59.214000 audit: BPF prog-id=182 op=LOAD Jan 28 02:03:59.233207 kernel: audit: type=1334 audit(1769565839.214:540): prog-id=182 op=LOAD Jan 28 02:03:59.214000 audit[4773]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.254094 kernel: audit: type=1300 audit(1769565839.214:540): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.277327 kernel: audit: type=1327 audit(1769565839.214:540): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.214000 audit: BPF prog-id=182 op=UNLOAD Jan 28 02:03:59.214000 audit[4773]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.302668 kernel: audit: type=1334 audit(1769565839.214:541): prog-id=182 op=UNLOAD Jan 28 02:03:59.302775 kernel: audit: type=1300 audit(1769565839.214:541): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.302950 kernel: audit: type=1327 audit(1769565839.214:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.317966 kernel: audit: type=1334 audit(1769565839.214:542): prog-id=183 op=LOAD Jan 28 02:03:59.214000 audit: BPF prog-id=183 op=LOAD Jan 28 02:03:59.323016 kernel: audit: type=1300 audit(1769565839.214:542): arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c00017a488 a2=98 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.214000 audit[4773]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.371592 kernel: audit: type=1327 audit(1769565839.214:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.214000 audit: BPF prog-id=184 op=LOAD Jan 28 02:03:59.214000 audit[4773]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.215000 audit: BPF prog-id=184 op=UNLOAD Jan 28 02:03:59.215000 audit[4773]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.215000 audit: BPF prog-id=183 op=UNLOAD Jan 28 02:03:59.215000 audit[4773]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.215000 audit: BPF prog-id=185 op=LOAD Jan 28 02:03:59.215000 audit[4773]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:03:59.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313665646632663863353133616565643061643462613535386639 Jan 28 02:03:59.399909 containerd[1601]: time="2026-01-28T02:03:59.399128730Z" 
level=info msg="StartContainer for \"fd16edf2f8c513aeed0ad4ba558f921898c59d7570c6ac85e9e75ee3a456f3bd\" returns successfully" Jan 28 02:03:59.998063 kubelet[1960]: E0128 02:03:59.998001 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:00.202953 kubelet[1960]: I0128 02:04:00.202354 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-c7z7j" podStartSLOduration=64.530297615 podStartE2EDuration="1m19.202331004s" podCreationTimestamp="2026-01-28 02:02:41 +0000 UTC" firstStartedPulling="2026-01-28 02:03:44.304060375 +0000 UTC m=+185.315772081" lastFinishedPulling="2026-01-28 02:03:58.976093764 +0000 UTC m=+199.987805470" observedRunningTime="2026-01-28 02:04:00.130727561 +0000 UTC m=+201.142439287" watchObservedRunningTime="2026-01-28 02:04:00.202331004 +0000 UTC m=+201.214042720" Jan 28 02:04:00.340672 kubelet[1960]: E0128 02:04:00.340392 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:00.368323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276981148.mount: Deactivated successfully. 
Jan 28 02:04:01.001339 kubelet[1960]: E0128 02:04:01.000107 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:02.003142 kubelet[1960]: E0128 02:04:02.003041 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:03.012122 kubelet[1960]: E0128 02:04:03.012028 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:04.015396 kubelet[1960]: E0128 02:04:04.013149 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:04.468796 kubelet[1960]: E0128 02:04:04.468664 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:05.015347 kubelet[1960]: E0128 02:04:05.015165 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:06.020021 kubelet[1960]: E0128 02:04:06.018965 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:07.023193 kubelet[1960]: E0128 02:04:07.022628 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:07.777482 containerd[1601]: time="2026-01-28T02:04:07.777371893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:07.780112 containerd[1601]: time="2026-01-28T02:04:07.778148885Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18309378" Jan 28 02:04:07.783024 containerd[1601]: time="2026-01-28T02:04:07.782652385Z" 
level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:07.791111 containerd[1601]: time="2026-01-28T02:04:07.791039208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:07.792409 containerd[1601]: time="2026-01-28T02:04:07.792373172Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 8.707189535s" Jan 28 02:04:07.792514 containerd[1601]: time="2026-01-28T02:04:07.792414126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 02:04:07.795251 containerd[1601]: time="2026-01-28T02:04:07.794350939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 02:04:07.795954 containerd[1601]: time="2026-01-28T02:04:07.795929898Z" level=info msg="CreateContainer within sandbox \"0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:04:07.828144 containerd[1601]: time="2026-01-28T02:04:07.826988612Z" level=info msg="Container 8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:04:07.847972 containerd[1601]: time="2026-01-28T02:04:07.846004739Z" level=info msg="CreateContainer within sandbox \"0b6bba08e7c88b5e51a502864e05cd6bfab2564aacea9e2a3a962c2e934bb82f\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad\"" Jan 28 02:04:07.848384 containerd[1601]: time="2026-01-28T02:04:07.848130566Z" level=info msg="StartContainer for \"8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad\"" Jan 28 02:04:07.849716 containerd[1601]: time="2026-01-28T02:04:07.849628643Z" level=info msg="connecting to shim 8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad" address="unix:///run/containerd/s/6608dff3d6fe1644a32bd3a09440a25251de2c6f8759df1e7ce260ae8746a7a3" protocol=ttrpc version=3 Jan 28 02:04:07.942266 systemd[1]: Started cri-containerd-8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad.scope - libcontainer container 8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad. Jan 28 02:04:07.980000 audit: BPF prog-id=186 op=LOAD Jan 28 02:04:07.991081 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 02:04:07.991294 kernel: audit: type=1334 audit(1769565847.980:547): prog-id=186 op=LOAD Jan 28 02:04:07.985000 audit: BPF prog-id=187 op=LOAD Jan 28 02:04:08.006131 kernel: audit: type=1334 audit(1769565847.985:548): prog-id=187 op=LOAD Jan 28 02:04:08.006289 kernel: audit: type=1300 audit(1769565847.985:548): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.025092 kubelet[1960]: E0128 02:04:08.025042 1960 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:08.044049 kernel: audit: type=1327 audit(1769565847.985:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: BPF prog-id=187 op=UNLOAD Jan 28 02:04:08.064613 kernel: audit: type=1334 audit(1769565847.985:549): prog-id=187 op=UNLOAD Jan 28 02:04:08.064833 kernel: audit: type=1300 audit(1769565847.985:549): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.076948 containerd[1601]: time="2026-01-28T02:04:08.075252935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:08.094040 kernel: audit: type=1327 audit(1769565847.985:549): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:08.102346 containerd[1601]: time="2026-01-28T02:04:08.101122868Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 28 02:04:08.115226 kernel: audit: type=1334 audit(1769565847.985:550): prog-id=188 op=LOAD Jan 28 02:04:07.985000 audit: BPF prog-id=188 op=LOAD Jan 28 02:04:08.115428 containerd[1601]: time="2026-01-28T02:04:08.115204796Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 320.101815ms" Jan 28 02:04:08.115428 containerd[1601]: time="2026-01-28T02:04:08.115297763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 02:04:08.120361 containerd[1601]: time="2026-01-28T02:04:08.120061409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.125080 containerd[1601]: time="2026-01-28T02:04:08.125049418Z" level=info msg="CreateContainer within sandbox \"f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:04:08.127461 containerd[1601]: time="2026-01-28T02:04:08.127331060Z" level=info msg="StartContainer for \"8158f140c6ec547d4da0abc3fef5e08109f7e0598f7b3aaade287fdb2908edad\" returns successfully" Jan 28 02:04:07.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:08.157451 kernel: audit: type=1300 audit(1769565847.985:550): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.157585 kernel: audit: type=1327 audit(1769565847.985:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: BPF prog-id=189 op=LOAD Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:07.985000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: BPF prog-id=189 op=UNLOAD Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:07.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: BPF prog-id=188 op=UNLOAD Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:07.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:07.985000 audit: BPF prog-id=190 op=LOAD Jan 28 02:04:07.985000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4168 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
02:04:07.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831353866313430633665633534376434646130616263336665663565 Jan 28 02:04:08.200443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568962210.mount: Deactivated successfully. Jan 28 02:04:08.201749 containerd[1601]: time="2026-01-28T02:04:08.201707580Z" level=info msg="Container 2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:04:08.225803 containerd[1601]: time="2026-01-28T02:04:08.225757523Z" level=info msg="CreateContainer within sandbox \"f24b9f887e0bcbe8b27b44a52bd711ba984d54a9edee98395541bd6412ae70de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512\"" Jan 28 02:04:08.230764 containerd[1601]: time="2026-01-28T02:04:08.227259693Z" level=info msg="StartContainer for \"2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512\"" Jan 28 02:04:08.230764 containerd[1601]: time="2026-01-28T02:04:08.229104778Z" level=info msg="connecting to shim 2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512" address="unix:///run/containerd/s/d30d4fa81c08273cc71596cc163a72d8ef3aeffb412dce5825b0aaaf20c766d5" protocol=ttrpc version=3 Jan 28 02:04:08.243109 kubelet[1960]: E0128 02:04:08.240462 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:08.252982 containerd[1601]: time="2026-01-28T02:04:08.252941142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.260786 containerd[1601]: time="2026-01-28T02:04:08.260519922Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:04:08.262093 containerd[1601]: time="2026-01-28T02:04:08.262065440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.263325 kubelet[1960]: E0128 02:04:08.263289 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:08.263445 kubelet[1960]: E0128 02:04:08.263424 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:08.266130 containerd[1601]: time="2026-01-28T02:04:08.266102745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:04:08.267296 kubelet[1960]: E0128 02:04:08.266232 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d497h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:08.268588 kubelet[1960]: E0128 02:04:08.268554 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8" Jan 28 02:04:08.317381 systemd[1]: Started cri-containerd-2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512.scope - libcontainer container 2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512. Jan 28 02:04:08.342963 containerd[1601]: time="2026-01-28T02:04:08.342044720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.354735 containerd[1601]: time="2026-01-28T02:04:08.354616035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.354933 containerd[1601]: time="2026-01-28T02:04:08.354740999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:04:08.355303 kubelet[1960]: E0128 02:04:08.355200 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" 
Jan 28 02:04:08.355370 kubelet[1960]: E0128 02:04:08.355295 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:04:08.356452 kubelet[1960]: E0128 02:04:08.356022 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4fnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:08.356677 containerd[1601]: time="2026-01-28T02:04:08.356340522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:04:08.361000 audit: BPF prog-id=191 op=LOAD Jan 28 02:04:08.363000 audit: BPF prog-id=192 op=LOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.363000 audit: BPF prog-id=192 op=UNLOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.363000 audit: BPF prog-id=193 op=LOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.363000 audit: BPF prog-id=194 op=LOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.363000 audit: BPF prog-id=194 op=UNLOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.363000 audit: BPF prog-id=193 op=UNLOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.363000 audit: BPF prog-id=195 op=LOAD Jan 28 02:04:08.363000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4238 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236343465646436623532643062333237313864333865653234613230 Jan 28 02:04:08.370023 kubelet[1960]: E0128 02:04:08.364829 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc" Jan 28 02:04:08.436671 containerd[1601]: time="2026-01-28T02:04:08.436614645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.441382 containerd[1601]: time="2026-01-28T02:04:08.441348262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.441474 containerd[1601]: time="2026-01-28T02:04:08.441421080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:04:08.443401 kubelet[1960]: E0128 02:04:08.442756 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:08.443763 kubelet[1960]: E0128 02:04:08.443532 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:08.447436 kubelet[1960]: E0128 02:04:08.446254 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nln7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:08.448040 kubelet[1960]: E0128 02:04:08.447618 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:04:08.453197 containerd[1601]: time="2026-01-28T02:04:08.450612772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:04:08.453197 containerd[1601]: time="2026-01-28T02:04:08.451989473Z" level=info msg="StartContainer for 
\"2644edd6b52d0b32718d38ee24a20942a674777c4951a091a5d71f3e5b995512\" returns successfully" Jan 28 02:04:08.463000 audit[4976]: NETFILTER_CFG table=filter:71 family=2 entries=20 op=nft_register_rule pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:08.463000 audit[4976]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc018ea8f0 a2=0 a3=7ffc018ea8dc items=0 ppid=2260 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:08.477000 audit[4976]: NETFILTER_CFG table=nat:72 family=2 entries=14 op=nft_register_rule pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:08.477000 audit[4976]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc018ea8f0 a2=0 a3=0 items=0 ppid=2260 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:08.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:08.545455 containerd[1601]: time="2026-01-28T02:04:08.544751614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.555901 containerd[1601]: time="2026-01-28T02:04:08.555651196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:04:08.555901 
containerd[1601]: time="2026-01-28T02:04:08.555758259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.556669 kubelet[1960]: E0128 02:04:08.556487 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:04:08.556669 kubelet[1960]: E0128 02:04:08.556561 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:04:08.556987 kubelet[1960]: E0128 02:04:08.556811 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-b
undle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jld9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:08.557923 containerd[1601]: 
time="2026-01-28T02:04:08.557573746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:04:08.558441 kubelet[1960]: E0128 02:04:08.558307 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:04:08.633662 containerd[1601]: time="2026-01-28T02:04:08.631660926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.652321 containerd[1601]: time="2026-01-28T02:04:08.652259374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:04:08.652814 containerd[1601]: time="2026-01-28T02:04:08.652455226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.653727 kubelet[1960]: E0128 02:04:08.653034 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:04:08.653727 kubelet[1960]: E0128 02:04:08.653088 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:04:08.653727 
kubelet[1960]: E0128 02:04:08.653429 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ee390cb8e04c4e1abe7adde8491b183a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hcnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:08.655515 containerd[1601]: time="2026-01-28T02:04:08.653956214Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:04:08.726418 containerd[1601]: time="2026-01-28T02:04:08.725807789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.741942 containerd[1601]: time="2026-01-28T02:04:08.738036657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:04:08.741942 containerd[1601]: time="2026-01-28T02:04:08.738192758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.741942 containerd[1601]: time="2026-01-28T02:04:08.740174800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:04:08.742204 kubelet[1960]: E0128 02:04:08.738348 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:04:08.742204 kubelet[1960]: E0128 02:04:08.738398 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:04:08.742204 kubelet[1960]: E0128 02:04:08.741623 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxqrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 28 02:04:08.748387 kubelet[1960]: E0128 02:04:08.748205 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:04:08.821385 containerd[1601]: time="2026-01-28T02:04:08.821048751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:08.832542 containerd[1601]: time="2026-01-28T02:04:08.832181027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:04:08.832542 containerd[1601]: time="2026-01-28T02:04:08.832250083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:08.834191 kubelet[1960]: E0128 02:04:08.833297 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 
02:04:08.834191 kubelet[1960]: E0128 02:04:08.833425 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:04:08.834191 kubelet[1960]: E0128 02:04:08.833545 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hcnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,
SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:08.835090 kubelet[1960]: E0128 02:04:08.834930 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:04:09.029675 kubelet[1960]: E0128 02:04:09.028799 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:09.257587 kubelet[1960]: E0128 02:04:09.257224 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:09.261333 kubelet[1960]: E0128 02:04:09.258526 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:09.270404 kubelet[1960]: E0128 02:04:09.270333 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8" Jan 28 02:04:09.285239 kubelet[1960]: E0128 02:04:09.275011 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:04:09.285239 kubelet[1960]: E0128 02:04:09.282371 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:04:09.285239 kubelet[1960]: E0128 02:04:09.282995 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:04:09.285239 kubelet[1960]: E0128 02:04:09.284355 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc" Jan 28 02:04:09.379391 kubelet[1960]: I0128 02:04:09.377775 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwxm9" podStartSLOduration=313.546406653 podStartE2EDuration="5m36.377751108s" podCreationTimestamp="2026-01-28 01:58:33 +0000 UTC" firstStartedPulling="2026-01-28 02:03:44.962313338 +0000 UTC m=+185.974025045" lastFinishedPulling="2026-01-28 02:04:07.793657794 +0000 UTC m=+208.805369500" observedRunningTime="2026-01-28 02:04:08.338417702 +0000 UTC m=+209.350129408" watchObservedRunningTime="2026-01-28 02:04:09.377751108 +0000 UTC m=+210.389462814" Jan 28 
02:04:09.500000 audit[4988]: NETFILTER_CFG table=filter:73 family=2 entries=20 op=nft_register_rule pid=4988 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:09.500000 audit[4988]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd38e94710 a2=0 a3=7ffd38e946fc items=0 ppid=2260 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:09.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:09.513000 audit[4988]: NETFILTER_CFG table=nat:74 family=2 entries=14 op=nft_register_rule pid=4988 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:09.513000 audit[4988]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd38e94710 a2=0 a3=0 items=0 ppid=2260 pid=4988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:09.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:09.634000 audit[4990]: NETFILTER_CFG table=filter:75 family=2 entries=20 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:09.634000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf12a3e30 a2=0 a3=7ffcf12a3e1c items=0 ppid=2260 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:09.634000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:09.653000 audit[4990]: NETFILTER_CFG table=nat:76 family=2 entries=14 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:09.653000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf12a3e30 a2=0 a3=0 items=0 ppid=2260 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:09.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:09.742416 kubelet[1960]: I0128 02:04:09.742288 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t45sz" podStartSLOduration=313.721561125 podStartE2EDuration="5m36.742266697s" podCreationTimestamp="2026-01-28 01:58:33 +0000 UTC" firstStartedPulling="2026-01-28 02:03:45.09853354 +0000 UTC m=+186.110245245" lastFinishedPulling="2026-01-28 02:04:08.119239112 +0000 UTC m=+209.130950817" observedRunningTime="2026-01-28 02:04:09.742196225 +0000 UTC m=+210.753907981" watchObservedRunningTime="2026-01-28 02:04:09.742266697 +0000 UTC m=+210.753978402" Jan 28 02:04:10.032830 kubelet[1960]: E0128 02:04:10.032495 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:10.277129 kubelet[1960]: E0128 02:04:10.274461 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:10.277129 kubelet[1960]: E0128 02:04:10.274495 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:10.709000 audit[4992]: NETFILTER_CFG table=filter:77 family=2 entries=17 op=nft_register_rule pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:10.709000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe1a257980 a2=0 a3=7ffe1a25796c items=0 ppid=2260 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:10.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:10.821000 audit[4992]: NETFILTER_CFG table=nat:78 family=2 entries=47 op=nft_register_chain pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:10.821000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe1a257980 a2=0 a3=7ffe1a25796c items=0 ppid=2260 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:10.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:11.034251 kubelet[1960]: E0128 02:04:11.033075 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:11.372058 kubelet[1960]: E0128 02:04:11.337302 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:12.037527 kubelet[1960]: E0128 02:04:12.037098 1960 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:13.039499 kubelet[1960]: E0128 02:04:13.038170 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:13.594544 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 28 02:04:13.595731 kernel: audit: type=1325 audit(1769565853.573:571): table=filter:79 family=2 entries=26 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:13.573000 audit[4998]: NETFILTER_CFG table=filter:79 family=2 entries=26 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:13.620091 kernel: audit: type=1300 audit(1769565853.573:571): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe85051420 a2=0 a3=7ffe8505140c items=0 ppid=2260 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:13.573000 audit[4998]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe85051420 a2=0 a3=7ffe8505140c items=0 ppid=2260 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:13.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:13.698747 kernel: audit: type=1327 audit(1769565853.573:571): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:13.730000 audit[4998]: NETFILTER_CFG table=nat:80 family=2 entries=20 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:13.730000 audit[4998]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe85051420 a2=0 a3=0 items=0 ppid=2260 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:13.818766 kernel: audit: type=1325 audit(1769565853.730:572): table=nat:80 family=2 entries=20 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:13.824381 kernel: audit: type=1300 audit(1769565853.730:572): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe85051420 a2=0 a3=0 items=0 ppid=2260 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:13.825103 kernel: audit: type=1327 audit(1769565853.730:572): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:13.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:13.879714 systemd[1]: Created slice kubepods-besteffort-pod1b296ad7_6efb_492c_b55e_7aa6cb72f8ea.slice - libcontainer container kubepods-besteffort-pod1b296ad7_6efb_492c_b55e_7aa6cb72f8ea.slice. 
Jan 28 02:04:13.881509 kubelet[1960]: I0128 02:04:13.879991 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1b296ad7-6efb-492c-b55e-7aa6cb72f8ea-data\") pod \"nfs-server-provisioner-0\" (UID: \"1b296ad7-6efb-492c-b55e-7aa6cb72f8ea\") " pod="default/nfs-server-provisioner-0" Jan 28 02:04:13.881509 kubelet[1960]: I0128 02:04:13.880129 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jsvz\" (UniqueName: \"kubernetes.io/projected/1b296ad7-6efb-492c-b55e-7aa6cb72f8ea-kube-api-access-4jsvz\") pod \"nfs-server-provisioner-0\" (UID: \"1b296ad7-6efb-492c-b55e-7aa6cb72f8ea\") " pod="default/nfs-server-provisioner-0" Jan 28 02:04:13.938215 kubelet[1960]: E0128 02:04:13.936391 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:04:14.079216 kubelet[1960]: E0128 02:04:14.077504 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:14.131835 kernel: audit: type=1325 audit(1769565854.108:573): table=filter:81 family=2 entries=38 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:14.135244 kernel: audit: type=1300 audit(1769565854.108:573): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffea3cbba30 a2=0 a3=7ffea3cbba1c items=0 ppid=2260 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:14.108000 audit[5000]: NETFILTER_CFG table=filter:81 family=2 entries=38 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:14.108000 
audit[5000]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffea3cbba30 a2=0 a3=7ffea3cbba1c items=0 ppid=2260 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:14.215057 kernel: audit: type=1327 audit(1769565854.108:573): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:14.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:14.315000 audit[5000]: NETFILTER_CFG table=nat:82 family=2 entries=20 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:14.341942 kernel: audit: type=1325 audit(1769565854.315:574): table=nat:82 family=2 entries=20 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:14.315000 audit[5000]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea3cbba30 a2=0 a3=0 items=0 ppid=2260 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:14.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:14.895391 containerd[1601]: time="2026-01-28T02:04:14.888671850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1b296ad7-6efb-492c-b55e-7aa6cb72f8ea,Namespace:default,Attempt:0,}" Jan 28 02:04:15.087021 kubelet[1960]: E0128 02:04:15.079391 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:16.087392 
kubelet[1960]: E0128 02:04:16.086294 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:17.099047 kubelet[1960]: E0128 02:04:17.090316 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:17.293462 systemd-networkd[1507]: cali60e51b789ff: Link UP Jan 28 02:04:17.304959 systemd-networkd[1507]: cali60e51b789ff: Gained carrier Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.019 [INFO][5002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 1b296ad7-6efb-492c-b55e-7aa6cb72f8ea 1791 0 2026-01-28 02:04:13 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.114 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.021 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.302 [INFO][5018] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" HandleID="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Workload="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.303 [INFO][5018] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" HandleID="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Workload="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e170), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.114", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-28 02:04:16.302822167 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.304 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.304 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.304 [INFO][5018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.386 [INFO][5018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.526 [INFO][5018] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.615 [INFO][5018] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.679 [INFO][5018] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.711 [INFO][5018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.716 [INFO][5018] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.774 [INFO][5018] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559 Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.874 [INFO][5018] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.990 [INFO][5018] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.138/26] block=192.168.101.128/26 
handle="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.993 [INFO][5018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.138/26] handle="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" host="10.0.0.114" Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.993 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:04:17.822467 containerd[1601]: 2026-01-28 02:04:16.993 [INFO][5018] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.138/26] IPv6=[] ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" HandleID="k8s-pod-network.cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Workload="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.824446 containerd[1601]: 2026-01-28 02:04:17.033 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1b296ad7-6efb-492c-b55e-7aa6cb72f8ea", ResourceVersion:"1791", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 4, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:04:17.824446 containerd[1601]: 2026-01-28 02:04:17.080 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.138/32] ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.824446 containerd[1601]: 2026-01-28 02:04:17.098 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.824446 containerd[1601]: 2026-01-28 02:04:17.483 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.827076 containerd[1601]: 2026-01-28 02:04:17.649 [INFO][5002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1b296ad7-6efb-492c-b55e-7aa6cb72f8ea", ResourceVersion:"1791", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 4, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"86:62:ef:ef:39:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:04:17.827076 containerd[1601]: 2026-01-28 02:04:17.811 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Jan 28 02:04:17.897000 audit[5036]: NETFILTER_CFG table=filter:83 family=2 entries=74 op=nft_register_chain pid=5036 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 02:04:17.897000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=31924 a0=3 a1=7ffed9bce640 a2=0 a3=7ffed9bce62c items=0 ppid=3971 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:17.897000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 02:04:17.944201 containerd[1601]: time="2026-01-28T02:04:17.944060946Z" level=info msg="connecting to shim cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559" address="unix:///run/containerd/s/7b7fe3b5b5419c0af202961ea95be2afb5ceb12847021fd11bf4ddbc40d6f779" namespace=k8s.io protocol=ttrpc version=3 Jan 28 02:04:18.091806 systemd[1]: Started cri-containerd-cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559.scope - libcontainer container cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559. 
Jan 28 02:04:18.099538 kubelet[1960]: E0128 02:04:18.095195 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:18.192000 audit: BPF prog-id=196 op=LOAD Jan 28 02:04:18.194000 audit: BPF prog-id=197 op=LOAD Jan 28 02:04:18.194000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.194000 audit: BPF prog-id=197 op=UNLOAD Jan 28 02:04:18.194000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:18.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.194000 audit: BPF prog-id=198 op=LOAD Jan 28 02:04:18.194000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:18.194000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.195000 audit: BPF prog-id=199 op=LOAD Jan 28 02:04:18.195000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:18.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.195000 audit: BPF prog-id=199 op=UNLOAD Jan 28 02:04:18.195000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:18.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.195000 audit: BPF prog-id=198 op=UNLOAD Jan 28 02:04:18.195000 audit[5056]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 02:04:18.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.195000 audit: BPF prog-id=200 op=LOAD Jan 28 02:04:18.195000 audit[5056]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5046 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:18.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364353365653265343365333539353234366331373135346661396662 Jan 28 02:04:18.200917 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 02:04:18.317919 containerd[1601]: time="2026-01-28T02:04:18.317646756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1b296ad7-6efb-492c-b55e-7aa6cb72f8ea,Namespace:default,Attempt:0,} returns sandbox id \"cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559\"" Jan 28 02:04:18.329731 containerd[1601]: time="2026-01-28T02:04:18.329440931Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 28 02:04:18.477178 systemd-networkd[1507]: cali60e51b789ff: Gained IPv6LL Jan 28 02:04:19.097381 kubelet[1960]: E0128 02:04:19.096681 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:20.101980 kubelet[1960]: E0128 02:04:20.101828 1960 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:20.338934 kubelet[1960]: E0128 02:04:20.338826 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:21.104244 kubelet[1960]: E0128 02:04:21.104097 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:22.106040 kubelet[1960]: E0128 02:04:22.105711 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:23.110323 kubelet[1960]: E0128 02:04:23.110020 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:24.111352 kubelet[1960]: E0128 02:04:24.111254 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:25.114343 kubelet[1960]: E0128 02:04:25.114171 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:25.634787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226235254.mount: Deactivated successfully. 
Jan 28 02:04:26.117563 kubelet[1960]: E0128 02:04:26.117496 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:27.122061 kubelet[1960]: E0128 02:04:27.121967 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:28.124657 kubelet[1960]: E0128 02:04:28.123036 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:29.126820 kubelet[1960]: E0128 02:04:29.126392 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:30.128671 kubelet[1960]: E0128 02:04:30.127563 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:31.131220 kubelet[1960]: E0128 02:04:31.129258 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:32.130127 kubelet[1960]: E0128 02:04:32.130060 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:33.134680 kubelet[1960]: E0128 02:04:33.134627 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:34.139284 kubelet[1960]: E0128 02:04:34.138214 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:34.479058 containerd[1601]: time="2026-01-28T02:04:34.476719505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:34.495583 containerd[1601]: time="2026-01-28T02:04:34.495088851Z" level=info msg="stop pulling image 
registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=89930520" Jan 28 02:04:34.507338 containerd[1601]: time="2026-01-28T02:04:34.504519558Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:34.512318 containerd[1601]: time="2026-01-28T02:04:34.512208842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:04:34.515046 containerd[1601]: time="2026-01-28T02:04:34.513541067Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 16.184054862s" Jan 28 02:04:34.515046 containerd[1601]: time="2026-01-28T02:04:34.513626313Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 28 02:04:34.520341 containerd[1601]: time="2026-01-28T02:04:34.520089718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:04:34.522101 containerd[1601]: time="2026-01-28T02:04:34.522068068Z" level=info msg="CreateContainer within sandbox \"cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 28 02:04:34.596827 containerd[1601]: time="2026-01-28T02:04:34.596776662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:34.607527 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1789319719.mount: Deactivated successfully. Jan 28 02:04:34.610061 containerd[1601]: time="2026-01-28T02:04:34.608900551Z" level=info msg="Container c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a: CDI devices from CRI Config.CDIDevices: []" Jan 28 02:04:34.613257 containerd[1601]: time="2026-01-28T02:04:34.612803651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:04:34.613431 containerd[1601]: time="2026-01-28T02:04:34.613355674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:34.614990 kubelet[1960]: E0128 02:04:34.614508 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:34.614990 kubelet[1960]: E0128 02:04:34.614581 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:34.616779 kubelet[1960]: E0128 02:04:34.615176 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nln7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:34.617553 containerd[1601]: time="2026-01-28T02:04:34.616201723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:04:34.623062 kubelet[1960]: E0128 02:04:34.620567 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:04:34.653751 containerd[1601]: time="2026-01-28T02:04:34.652242753Z" level=info msg="CreateContainer within sandbox \"cd53ee2e43e3595246c17154fa9fb5a238ee2c8b4a4877ae544d085aa7856559\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a\"" Jan 28 02:04:34.653751 containerd[1601]: time="2026-01-28T02:04:34.653442133Z" level=info msg="StartContainer for \"c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a\"" Jan 28 02:04:34.660733 containerd[1601]: time="2026-01-28T02:04:34.659026206Z" level=info msg="connecting to shim c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a" address="unix:///run/containerd/s/7b7fe3b5b5419c0af202961ea95be2afb5ceb12847021fd11bf4ddbc40d6f779" protocol=ttrpc version=3 Jan 28 02:04:34.741741 containerd[1601]: time="2026-01-28T02:04:34.741385643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:34.748825 containerd[1601]: time="2026-01-28T02:04:34.748788521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:04:34.749138 containerd[1601]: time="2026-01-28T02:04:34.749113829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:34.750443 kubelet[1960]: E0128 02:04:34.750329 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:04:34.750500 kubelet[1960]: E0128 02:04:34.750438 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:04:34.751375 kubelet[1960]: E0128 02:04:34.751137 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxqrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:34.752313 containerd[1601]: time="2026-01-28T02:04:34.751825446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:04:34.760253 kubelet[1960]: E0128 02:04:34.760007 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f" Jan 28 02:04:34.791377 systemd[1]: Started cri-containerd-c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a.scope - libcontainer container c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a. 
Jan 28 02:04:34.840775 containerd[1601]: time="2026-01-28T02:04:34.836116835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:34.846601 containerd[1601]: time="2026-01-28T02:04:34.846507715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:04:34.846804 containerd[1601]: time="2026-01-28T02:04:34.846718351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:34.847244 kubelet[1960]: E0128 02:04:34.847088 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:34.847244 kubelet[1960]: E0128 02:04:34.847149 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:34.848742 kubelet[1960]: E0128 02:04:34.847401 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d497h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6656f8f9d9-spnd9_calico-apiserver(67521aee-68dc-4703-af3e-6a8c6df60cd8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:34.848742 kubelet[1960]: E0128 02:04:34.848502 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8" Jan 28 02:04:34.859189 containerd[1601]: time="2026-01-28T02:04:34.848193115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:04:34.858000 audit: BPF prog-id=201 op=LOAD Jan 28 02:04:34.868323 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 02:04:34.868423 kernel: audit: type=1334 audit(1769565874.858:584): prog-id=201 op=LOAD Jan 28 02:04:34.861000 audit: BPF prog-id=202 op=LOAD Jan 28 02:04:34.884428 kernel: audit: type=1334 audit(1769565874.861:585): prog-id=202 op=LOAD Jan 28 02:04:34.884529 kernel: audit: type=1300 audit(1769565874.861:585): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.861000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.861000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.944476 containerd[1601]: time="2026-01-28T02:04:34.944265373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:34.948047 containerd[1601]: time="2026-01-28T02:04:34.947662484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:04:34.948144 containerd[1601]: time="2026-01-28T02:04:34.948132005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:34.948616 kubelet[1960]: E0128 02:04:34.948569 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:04:34.948766 kubelet[1960]: E0128 02:04:34.948727 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:04:34.949029 kernel: audit: type=1327 audit(1769565874.861:585): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.949515 kubelet[1960]: E0128 02:04:34.949454 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jld9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:34.954493 kubelet[1960]: E0128 02:04:34.954366 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e" Jan 28 02:04:34.960977 containerd[1601]: time="2026-01-28T02:04:34.959485161Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:04:34.863000 audit: BPF prog-id=202 op=UNLOAD Jan 28 02:04:34.863000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:35.011973 kernel: audit: type=1334 audit(1769565874.863:586): prog-id=202 op=UNLOAD Jan 28 02:04:35.012117 kernel: audit: type=1300 audit(1769565874.863:586): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:35.012163 kernel: audit: type=1327 audit(1769565874.863:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.863000 audit: BPF prog-id=203 op=LOAD Jan 28 02:04:34.863000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:35.089076 kernel: audit: type=1334 audit(1769565874.863:587): prog-id=203 op=LOAD Jan 28 
02:04:35.089159 kernel: audit: type=1300 audit(1769565874.863:587): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:35.097498 containerd[1601]: time="2026-01-28T02:04:35.097307270Z" level=info msg="StartContainer for \"c46faa503906b580c921c9472b0bf35f465376cdc5ac79730c0fe8611592934a\" returns successfully" Jan 28 02:04:35.114626 kernel: audit: type=1327 audit(1769565874.863:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.863000 audit: BPF prog-id=204 op=LOAD Jan 28 02:04:34.863000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.863000 audit: BPF prog-id=204 op=UNLOAD Jan 28 02:04:34.863000 audit[5153]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.863000 audit: BPF prog-id=203 op=UNLOAD Jan 28 02:04:34.863000 audit[5153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:34.863000 audit: BPF prog-id=205 op=LOAD Jan 28 02:04:34.863000 audit[5153]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5046 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:34.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334366661613530333930366235383063393231633934373262306266 Jan 28 02:04:35.130897 containerd[1601]: time="2026-01-28T02:04:35.130183369Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:35.135673 containerd[1601]: time="2026-01-28T02:04:35.134228855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:04:35.135673 containerd[1601]: time="2026-01-28T02:04:35.134329639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:35.137247 kubelet[1960]: E0128 02:04:35.136904 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:04:35.137247 kubelet[1960]: E0128 02:04:35.137035 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:04:35.138052 kubelet[1960]: E0128 02:04:35.137372 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4fnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78fc6b544-rfcfq_calico-system(9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:35.138313 kubelet[1960]: E0128 02:04:35.138293 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:35.138610 containerd[1601]: time="2026-01-28T02:04:35.138536952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:04:35.139741 kubelet[1960]: E0128 02:04:35.139529 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" 
podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc" Jan 28 02:04:35.216135 containerd[1601]: time="2026-01-28T02:04:35.215449314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:35.226240 containerd[1601]: time="2026-01-28T02:04:35.224782321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:04:35.226240 containerd[1601]: time="2026-01-28T02:04:35.225028333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:35.228069 kubelet[1960]: E0128 02:04:35.227169 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:04:35.228069 kubelet[1960]: E0128 02:04:35.227230 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:04:35.229317 kubelet[1960]: E0128 02:04:35.228995 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ee390cb8e04c4e1abe7adde8491b183a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hcnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:35.240433 containerd[1601]: time="2026-01-28T02:04:35.239378346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:04:35.323726 containerd[1601]: 
time="2026-01-28T02:04:35.323303378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:35.345307 containerd[1601]: time="2026-01-28T02:04:35.339606454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:35.345307 containerd[1601]: time="2026-01-28T02:04:35.342472554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:04:35.345580 kubelet[1960]: E0128 02:04:35.342968 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:04:35.345580 kubelet[1960]: E0128 02:04:35.343022 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:04:35.345580 kubelet[1960]: E0128 02:04:35.343137 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hcnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:35.345580 kubelet[1960]: E0128 02:04:35.345528 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:04:36.139007 kubelet[1960]: E0128 02:04:36.138678 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:36.243000 audit[5219]: NETFILTER_CFG table=filter:84 family=2 entries=26 op=nft_register_rule pid=5219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:36.243000 audit[5219]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc564e650 a2=0 a3=7ffdc564e63c items=0 ppid=2260 pid=5219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:36.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:36.260000 audit[5219]: NETFILTER_CFG table=nat:85 family=2 entries=104 op=nft_register_chain pid=5219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 02:04:36.260000 audit[5219]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdc564e650 a2=0 a3=7ffdc564e63c items=0 ppid=2260 pid=5219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 02:04:36.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 02:04:37.143068 kubelet[1960]: E0128 02:04:37.142502 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:38.146218 kubelet[1960]: E0128 02:04:38.145010 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:39.148013 kubelet[1960]: E0128 02:04:39.147326 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:40.148356 kubelet[1960]: E0128 02:04:40.148150 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:40.343952 kubelet[1960]: E0128 02:04:40.341759 1960 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:41.161541 kubelet[1960]: E0128 02:04:41.154749 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:42.157448 kubelet[1960]: E0128 02:04:42.157384 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:43.162435 kubelet[1960]: E0128 02:04:43.161026 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:44.165109 kubelet[1960]: E0128 02:04:44.164224 1960 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:44.757176 kubelet[1960]: I0128 02:04:44.757003 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=15.565898877 podStartE2EDuration="31.756978106s" podCreationTimestamp="2026-01-28 02:04:13 +0000 UTC" firstStartedPulling="2026-01-28 02:04:18.326291531 +0000 UTC m=+219.338003238" lastFinishedPulling="2026-01-28 02:04:34.517370761 +0000 UTC m=+235.529082467" observedRunningTime="2026-01-28 02:04:36.108261201 +0000 UTC m=+237.119972906" watchObservedRunningTime="2026-01-28 02:04:44.756978106 +0000 UTC m=+245.768689822" Jan 28 02:04:44.782334 systemd[1]: Created slice kubepods-besteffort-pod4f2e45a8_08ab_4745_93a3_92dec41a0b61.slice - libcontainer container kubepods-besteffort-pod4f2e45a8_08ab_4745_93a3_92dec41a0b61.slice. Jan 28 02:04:44.964543 kubelet[1960]: I0128 02:04:44.963583 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-79007e76-8bc8-40a2-a1fc-3ee8a698617a\" (UniqueName: \"kubernetes.io/nfs/4f2e45a8-08ab-4745-93a3-92dec41a0b61-pvc-79007e76-8bc8-40a2-a1fc-3ee8a698617a\") pod \"test-pod-1\" (UID: \"4f2e45a8-08ab-4745-93a3-92dec41a0b61\") " pod="default/test-pod-1" Jan 28 02:04:44.964543 kubelet[1960]: I0128 02:04:44.963670 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpxj\" (UniqueName: \"kubernetes.io/projected/4f2e45a8-08ab-4745-93a3-92dec41a0b61-kube-api-access-8gpxj\") pod \"test-pod-1\" (UID: \"4f2e45a8-08ab-4745-93a3-92dec41a0b61\") " pod="default/test-pod-1" Jan 28 02:04:45.172826 kubelet[1960]: E0128 02:04:45.165558 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:45.847908 kernel: netfs: FS-Cache loaded Jan 28 02:04:46.169194 
kubelet[1960]: E0128 02:04:46.169034 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:46.738362 kernel: RPC: Registered named UNIX socket transport module.
Jan 28 02:04:46.738506 kernel: RPC: Registered udp transport module.
Jan 28 02:04:46.738545 kernel: RPC: Registered tcp transport module.
Jan 28 02:04:46.738581 kernel: RPC: Registered tcp-with-tls transport module.
Jan 28 02:04:46.738622 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 28 02:04:46.928516 kubelet[1960]: E0128 02:04:46.928435 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f"
Jan 28 02:04:46.933063 kubelet[1960]: E0128 02:04:46.930527 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c"
Jan 28 02:04:47.170278 kubelet[1960]: E0128 02:04:47.170147 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:47.919948 kubelet[1960]: E0128 02:04:47.919133 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e"
Jan 28 02:04:48.170831 kubelet[1960]: E0128 02:04:48.170566 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:48.194713 kernel: NFS: Registering the id_resolver key type
Jan 28 02:04:48.194991 kernel: Key type id_resolver registered
Jan 28 02:04:48.195036 kernel: Key type id_legacy registered
Jan 28 02:04:48.665555 nfsidmap[5251]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Jan 28 02:04:48.672146 nfsidmap[5251]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 28 02:04:48.712201 nfsidmap[5254]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Jan 28 02:04:48.712574 nfsidmap[5254]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 28 02:04:48.802492 nfsrahead[5258]: setting /var/lib/kubelet/pods/4f2e45a8-08ab-4745-93a3-92dec41a0b61/volumes/kubernetes.io~nfs/pvc-79007e76-8bc8-40a2-a1fc-3ee8a698617a readahead to 128
Jan 28 02:04:48.924017 kubelet[1960]: E0128 02:04:48.922019 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-spnd9" podUID="67521aee-68dc-4703-af3e-6a8c6df60cd8"
Jan 28 02:04:48.924017 kubelet[1960]: E0128 02:04:48.922771 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78fc6b544-rfcfq" podUID="9a7cf4fa-e7b1-45e3-92d2-5754fd7693cc"
Jan 28 02:04:48.924378 containerd[1601]: time="2026-01-28T02:04:48.923226983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 28 02:04:49.015546 containerd[1601]: time="2026-01-28T02:04:49.015318660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 02:04:49.019706 containerd[1601]: time="2026-01-28T02:04:49.019597785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 28 02:04:49.020097 containerd[1601]: time="2026-01-28T02:04:49.020069798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 28 02:04:49.020475 kubelet[1960]: E0128 02:04:49.020430 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 02:04:49.020937 kubelet[1960]: E0128 02:04:49.020582 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 02:04:49.020937 kubelet[1960]: E0128 02:04:49.020787 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxqrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-krgpk_calico-system(15b582de-4a9d-49bf-b8af-da9b7c0dc36f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:04:49.021177 containerd[1601]: time="2026-01-28T02:04:49.021095385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4f2e45a8-08ab-4745-93a3-92dec41a0b61,Namespace:default,Attempt:0,}"
Jan 28 02:04:49.039060 kubelet[1960]: E0128 02:04:49.038723 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-krgpk" podUID="15b582de-4a9d-49bf-b8af-da9b7c0dc36f"
Jan 28 02:04:49.186806 kubelet[1960]: E0128 02:04:49.185396 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:50.187595 kubelet[1960]: E0128 02:04:50.187391 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:50.520984 systemd-networkd[1507]: cali5ec59c6bf6e: Link UP
Jan 28 02:04:50.531540 systemd-networkd[1507]: cali5ec59c6bf6e: Gained carrier
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.470 [INFO][5259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-test--pod--1-eth0 default 4f2e45a8-08ab-4745-93a3-92dec41a0b61 1924 0 2026-01-28 02:04:15 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.114 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.470 [INFO][5259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.820 [INFO][5274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" HandleID="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Workload="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.821 [INFO][5274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" HandleID="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Workload="10.0.0.114-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f190), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.114", "pod":"test-pod-1", "timestamp":"2026-01-28 02:04:49.820275073 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.821 [INFO][5274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.821 [INFO][5274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.821 [INFO][5274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114'
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:49.939 [INFO][5274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.039 [INFO][5274] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.118 [INFO][5274] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.150 [INFO][5274] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.233 [INFO][5274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.233 [INFO][5274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.246 [INFO][5274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.335 [INFO][5274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.444 [INFO][5274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.139/26] block=192.168.101.128/26 handle="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.444 [INFO][5274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.139/26] handle="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" host="10.0.0.114"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.444 [INFO][5274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.444 [INFO][5274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.139/26] IPv6=[] ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" HandleID="k8s-pod-network.5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Workload="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.683506 containerd[1601]: 2026-01-28 02:04:50.494 [INFO][5259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4f2e45a8-08ab-4745-93a3-92dec41a0b61", ResourceVersion:"1924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 4, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.139/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 02:04:50.704378 containerd[1601]: 2026-01-28 02:04:50.494 [INFO][5259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.139/32] ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.704378 containerd[1601]: 2026-01-28 02:04:50.494 [INFO][5259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.704378 containerd[1601]: 2026-01-28 02:04:50.542 [INFO][5259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.704378 containerd[1601]: 2026-01-28 02:04:50.564 [INFO][5259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4f2e45a8-08ab-4745-93a3-92dec41a0b61", ResourceVersion:"1924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 4, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.139/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"b2:33:1b:aa:c3:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 28 02:04:50.704378 containerd[1601]: 2026-01-28 02:04:50.651 [INFO][5259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0"
Jan 28 02:04:50.769000 audit[5288]: NETFILTER_CFG table=filter:86 family=2 entries=64 op=nft_register_chain pid=5288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jan 28 02:04:50.787320 kernel: kauditd_printk_skb: 18 callbacks suppressed
Jan 28 02:04:50.787453 kernel: audit: type=1325 audit(1769565890.769:594): table=filter:86 family=2 entries=64 op=nft_register_chain pid=5288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Jan 28 02:04:50.769000 audit[5288]: SYSCALL arch=c000003e syscall=46 success=yes exit=27448 a0=3 a1=7ffc108270d0 a2=0 a3=7ffc108270bc items=0 ppid=3971 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:50.862664 containerd[1601]: time="2026-01-28T02:04:50.861434677Z" level=info msg="connecting to shim 5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107" address="unix:///run/containerd/s/7b44653a081f2942d285fe4b4dc03be4eed1c847063b2c79125dcf855e5ddea6" namespace=k8s.io protocol=ttrpc version=3
Jan 28 02:04:50.769000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jan 28 02:04:50.886373 kernel: audit: type=1300 audit(1769565890.769:594): arch=c000003e syscall=46 success=yes exit=27448 a0=3 a1=7ffc108270d0 a2=0 a3=7ffc108270bc items=0 ppid=3971 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:50.886540 kernel: audit: type=1327 audit(1769565890.769:594): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Jan 28 02:04:51.067524 systemd[1]: Started cri-containerd-5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107.scope - libcontainer container 5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107.
Jan 28 02:04:51.128000 audit: BPF prog-id=206 op=LOAD
Jan 28 02:04:51.133000 audit: BPF prog-id=207 op=LOAD
Jan 28 02:04:51.140544 kernel: audit: type=1334 audit(1769565891.128:595): prog-id=206 op=LOAD
Jan 28 02:04:51.140692 kernel: audit: type=1334 audit(1769565891.133:596): prog-id=207 op=LOAD
Jan 28 02:04:51.144512 kernel: audit: type=1300 audit(1769565891.133:596): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.141969 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 28 02:04:51.173526 kernel: audit: type=1327 audit(1769565891.133:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.192490 kubelet[1960]: E0128 02:04:51.191678 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:51.133000 audit: BPF prog-id=207 op=UNLOAD
Jan 28 02:04:51.202372 kernel: audit: type=1334 audit(1769565891.133:597): prog-id=207 op=UNLOAD
Jan 28 02:04:51.202466 kernel: audit: type=1300 audit(1769565891.133:597): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.229932 kernel: audit: type=1327 audit(1769565891.133:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: BPF prog-id=208 op=LOAD
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: BPF prog-id=209 op=LOAD
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: BPF prog-id=209 op=UNLOAD
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: BPF prog-id=208 op=UNLOAD
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.133000 audit: BPF prog-id=210 op=LOAD
Jan 28 02:04:51.133000 audit[5307]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5296 pid=5307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564386337613235333766663830663965626339653834333565343464
Jan 28 02:04:51.329453 containerd[1601]: time="2026-01-28T02:04:51.329238668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4f2e45a8-08ab-4745-93a3-92dec41a0b61,Namespace:default,Attempt:0,} returns sandbox id \"5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107\""
Jan 28 02:04:51.344927 containerd[1601]: time="2026-01-28T02:04:51.344399273Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 28 02:04:51.550171 containerd[1601]: time="2026-01-28T02:04:51.550071939Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:04:51.556308 containerd[1601]: time="2026-01-28T02:04:51.556189255Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=0"
Jan 28 02:04:51.566754 containerd[1601]: time="2026-01-28T02:04:51.566580828Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 216.395382ms"
Jan 28 02:04:51.566754 containerd[1601]: time="2026-01-28T02:04:51.566693726Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 28 02:04:51.570178 containerd[1601]: time="2026-01-28T02:04:51.570057014Z" level=info msg="CreateContainer within sandbox \"5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 28 02:04:51.600767 containerd[1601]: time="2026-01-28T02:04:51.600537400Z" level=info msg="Container 7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58: CDI devices from CRI Config.CDIDevices: []"
Jan 28 02:04:51.632379 containerd[1601]: time="2026-01-28T02:04:51.632213492Z" level=info msg="CreateContainer within sandbox \"5d8c7a2537ff80f9ebc9e8435e44d69da4db4bc27323fb8bcfe7f65679223107\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58\""
Jan 28 02:04:51.635216 containerd[1601]: time="2026-01-28T02:04:51.635131497Z" level=info msg="StartContainer for \"7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58\""
Jan 28 02:04:51.638367 containerd[1601]: time="2026-01-28T02:04:51.638275348Z" level=info msg="connecting to shim 7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58" address="unix:///run/containerd/s/7b44653a081f2942d285fe4b4dc03be4eed1c847063b2c79125dcf855e5ddea6" protocol=ttrpc version=3
Jan 28 02:04:51.701363 systemd[1]: Started cri-containerd-7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58.scope - libcontainer container 7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58.
Jan 28 02:04:51.742000 audit: BPF prog-id=211 op=LOAD
Jan 28 02:04:51.748000 audit: BPF prog-id=212 op=LOAD
Jan 28 02:04:51.748000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.748000 audit: BPF prog-id=212 op=UNLOAD
Jan 28 02:04:51.748000 audit[5334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.749000 audit: BPF prog-id=213 op=LOAD
Jan 28 02:04:51.749000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.749000 audit: BPF prog-id=214 op=LOAD
Jan 28 02:04:51.749000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.749000 audit: BPF prog-id=214 op=UNLOAD
Jan 28 02:04:51.749000 audit[5334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.750000 audit: BPF prog-id=213 op=UNLOAD
Jan 28 02:04:51.750000 audit[5334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.750000 audit: BPF prog-id=215 op=LOAD
Jan 28 02:04:51.750000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5296 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 02:04:51.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738393964653363626338356339653863663132383562653732366533
Jan 28 02:04:51.869317 containerd[1601]: time="2026-01-28T02:04:51.869156850Z" level=info msg="StartContainer for \"7899de3cbc85c9e8cf1285be726e328b970166461124b08612bdc13889175e58\" returns successfully"
Jan 28 02:04:51.947166 systemd-networkd[1507]: cali5ec59c6bf6e: Gained IPv6LL
Jan 28 02:04:52.197058 kubelet[1960]: E0128 02:04:52.196363 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:52.330543 kubelet[1960]: I0128 02:04:52.329433 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=37.104222061 podStartE2EDuration="37.329418331s" podCreationTimestamp="2026-01-28 02:04:15 +0000 UTC" firstStartedPulling="2026-01-28 02:04:51.342787003 +0000 UTC m=+252.354498708" lastFinishedPulling="2026-01-28 02:04:51.567983262 +0000 UTC m=+252.579694978" observedRunningTime="2026-01-28 02:04:52.327383161 +0000 UTC m=+253.339094878" watchObservedRunningTime="2026-01-28 02:04:52.329418331 +0000 UTC m=+253.341130037"
Jan 28 02:04:53.197996 kubelet[1960]: E0128 02:04:53.197933 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:54.200225 kubelet[1960]: E0128 02:04:54.200019 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:55.202272 kubelet[1960]: E0128 02:04:55.201439 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:56.204428 kubelet[1960]: E0128 02:04:56.202701 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:57.206013 kubelet[1960]: E0128 02:04:57.203471 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:58.211132 kubelet[1960]: E0128 02:04:58.208130 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 28 02:04:58.938788 containerd[1601]: time="2026-01-28T02:04:58.934059479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 02:04:59.041462 containerd[1601]: time="2026-01-28T02:04:59.039820627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 28 02:04:59.057477 containerd[1601]: time="2026-01-28T02:04:59.057275414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 02:04:59.057477 containerd[1601]: time="2026-01-28T02:04:59.057432166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active
requests=0, bytes read=0" Jan 28 02:04:59.065761 kubelet[1960]: E0128 02:04:59.065037 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:59.065761 kubelet[1960]: E0128 02:04:59.065101 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:04:59.065761 kubelet[1960]: E0128 02:04:59.065458 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nln7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6656f8f9d9-6mpkc_calico-apiserver(5a2efbc6-3a74-40a5-b192-41e159a7237c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:59.066532 containerd[1601]: time="2026-01-28T02:04:59.066302737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:04:59.067065 kubelet[1960]: E0128 02:04:59.066759 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6656f8f9d9-6mpkc" podUID="5a2efbc6-3a74-40a5-b192-41e159a7237c" Jan 28 02:04:59.160221 containerd[1601]: time="2026-01-28T02:04:59.159163544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:59.176981 containerd[1601]: time="2026-01-28T02:04:59.175448674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:04:59.176981 containerd[1601]: time="2026-01-28T02:04:59.175667720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:59.177188 kubelet[1960]: E0128 02:04:59.175834 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:04:59.177188 kubelet[1960]: E0128 02:04:59.175996 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:04:59.177188 kubelet[1960]: E0128 02:04:59.176124 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ee390cb8e04c4e1abe7adde8491b183a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6hcnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:59.184131 containerd[1601]: time="2026-01-28T02:04:59.183976659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:04:59.213151 kubelet[1960]: E0128 02:04:59.209239 1960 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 28 02:04:59.303357 containerd[1601]: time="2026-01-28T02:04:59.302001546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:04:59.310120 containerd[1601]: time="2026-01-28T02:04:59.309044981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:04:59.310120 containerd[1601]: time="2026-01-28T02:04:59.309128456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 02:04:59.310325 kubelet[1960]: E0128 02:04:59.309386 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:04:59.310325 kubelet[1960]: E0128 02:04:59.309443 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:04:59.314157 kubelet[1960]: E0128 
02:04:59.310669 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hcnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54df6f8c4d-bq29n_calico-system(9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:04:59.314157 kubelet[1960]: E0128 02:04:59.313101 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54df6f8c4d-bq29n" podUID="9ae7cefc-65b0-4fcd-9083-f9b1fd7f5a6f" Jan 28 02:04:59.923467 containerd[1601]: time="2026-01-28T02:04:59.922712794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:05:00.000148 containerd[1601]: time="2026-01-28T02:04:59.999203602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 02:05:00.006498 containerd[1601]: time="2026-01-28T02:05:00.006291256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:05:00.006498 containerd[1601]: time="2026-01-28T02:05:00.006331828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 02:05:00.008000 kubelet[1960]: E0128 02:05:00.007264 1960 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:05:00.008000 kubelet[1960]: E0128 02:05:00.007364 1960 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:05:00.008000 kubelet[1960]: E0128 02:05:00.007641 1960 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jld9p,ReadOnly:true,MountPath:/var/run/secrets/kub
ernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5zdgq_calico-system(f4b6fba0-f381-4858-a71c-ba2619256e7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:05:00.011794 kubelet[1960]: E0128 02:05:00.010013 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-5zdgq" podUID="f4b6fba0-f381-4858-a71c-ba2619256e7e"