Jan 21 06:15:13.753985 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 21 03:18:28 -00 2026
Jan 21 06:15:13.754015 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=81dc9acd509cfd27a090d5b49f20e13d238e4baed94e55e81b300154aedac937
Jan 21 06:15:13.754030 kernel: BIOS-provided physical RAM map:
Jan 21 06:15:13.754039 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 21 06:15:13.754047 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 21 06:15:13.754055 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 21 06:15:13.754064 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 21 06:15:13.754074 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 21 06:15:13.754083 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 21 06:15:13.754094 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 21 06:15:13.754108 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 21 06:15:13.754118 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 21 06:15:13.754129 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 21 06:15:13.754141 kernel: NX (Execute Disable) protection: active
Jan 21 06:15:13.754152 kernel: APIC: Static calls initialized
Jan 21 06:15:13.754164 kernel: SMBIOS 2.8 present.
Jan 21 06:15:13.754174 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 21 06:15:13.754183 kernel: DMI: Memory slots populated: 1/1
Jan 21 06:15:13.754191 kernel: Hypervisor detected: KVM
Jan 21 06:15:13.754200 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 21 06:15:13.754209 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 21 06:15:13.754217 kernel: kvm-clock: using sched offset of 17878142148 cycles
Jan 21 06:15:13.754227 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 21 06:15:13.754237 kernel: tsc: Detected 2445.426 MHz processor
Jan 21 06:15:13.754251 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 21 06:15:13.754263 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 21 06:15:13.754274 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 21 06:15:13.754285 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 21 06:15:13.754297 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 21 06:15:13.754308 kernel: Using GB pages for direct mapping
Jan 21 06:15:13.754319 kernel: ACPI: Early table checksum verification disabled
Jan 21 06:15:13.754335 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 21 06:15:13.754347 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754360 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754373 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754385 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 21 06:15:13.754395 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754405 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754418 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754428 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 21 06:15:13.754442 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 21 06:15:13.754451 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 21 06:15:13.754461 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 21 06:15:13.754474 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 21 06:15:13.754483 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 21 06:15:13.754494 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 21 06:15:13.754506 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 21 06:15:13.754517 kernel: No NUMA configuration found
Jan 21 06:15:13.754527 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 21 06:15:13.754538 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 21 06:15:13.754943 kernel: Zone ranges:
Jan 21 06:15:13.754957 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 21 06:15:13.754969 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 21 06:15:13.754982 kernel: Normal empty
Jan 21 06:15:13.754993 kernel: Device empty
Jan 21 06:15:13.755003 kernel: Movable zone start for each node
Jan 21 06:15:13.755012 kernel: Early memory node ranges
Jan 21 06:15:13.755026 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 21 06:15:13.755036 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 21 06:15:13.755046 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 21 06:15:13.755055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 21 06:15:13.755065 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 21 06:15:13.755075 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 21 06:15:13.755084 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 21 06:15:13.755095 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 21 06:15:13.755111 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 21 06:15:13.755123 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 21 06:15:13.755135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 21 06:15:13.755147 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 21 06:15:13.755159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 21 06:15:13.755172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 21 06:15:13.755180 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 21 06:15:13.755190 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 21 06:15:13.755197 kernel: TSC deadline timer available
Jan 21 06:15:13.755204 kernel: CPU topo: Max. logical packages: 1
Jan 21 06:15:13.755211 kernel: CPU topo: Max. logical dies: 1
Jan 21 06:15:13.755218 kernel: CPU topo: Max. dies per package: 1
Jan 21 06:15:13.755224 kernel: CPU topo: Max. threads per core: 1
Jan 21 06:15:13.755231 kernel: CPU topo: Num. cores per package: 4
Jan 21 06:15:13.755238 kernel: CPU topo: Num. threads per package: 4
Jan 21 06:15:13.755247 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 21 06:15:13.755254 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 21 06:15:13.755261 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 21 06:15:13.755267 kernel: kvm-guest: setup PV sched yield
Jan 21 06:15:13.755275 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 21 06:15:13.755281 kernel: Booting paravirtualized kernel on KVM
Jan 21 06:15:13.755289 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 21 06:15:13.755297 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 21 06:15:13.755304 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 21 06:15:13.755311 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 21 06:15:13.755318 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 21 06:15:13.755325 kernel: kvm-guest: PV spinlocks enabled
Jan 21 06:15:13.755332 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 21 06:15:13.755340 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=81dc9acd509cfd27a090d5b49f20e13d238e4baed94e55e81b300154aedac937
Jan 21 06:15:13.755349 kernel: random: crng init done
Jan 21 06:15:13.755356 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 21 06:15:13.755363 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 21 06:15:13.755370 kernel: Fallback order for Node 0: 0
Jan 21 06:15:13.755377 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 21 06:15:13.755383 kernel: Policy zone: DMA32
Jan 21 06:15:13.755390 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 21 06:15:13.755399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 21 06:15:13.755406 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 21 06:15:13.755413 kernel: ftrace: allocated 157 pages with 5 groups
Jan 21 06:15:13.755420 kernel: Dynamic Preempt: voluntary
Jan 21 06:15:13.755432 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 21 06:15:13.755451 kernel: rcu: RCU event tracing is enabled.
Jan 21 06:15:13.755462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 21 06:15:13.755475 kernel: Trampoline variant of Tasks RCU enabled.
Jan 21 06:15:13.755485 kernel: Rude variant of Tasks RCU enabled.
Jan 21 06:15:13.755495 kernel: Tracing variant of Tasks RCU enabled.
Jan 21 06:15:13.755504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 21 06:15:13.755514 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 21 06:15:13.755524 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 21 06:15:13.755533 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 21 06:15:13.755546 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 21 06:15:13.755934 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 21 06:15:13.755945 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 21 06:15:13.755964 kernel: Console: colour VGA+ 80x25
Jan 21 06:15:13.755977 kernel: printk: legacy console [ttyS0] enabled
Jan 21 06:15:13.755987 kernel: ACPI: Core revision 20240827
Jan 21 06:15:13.755998 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 21 06:15:13.756008 kernel: APIC: Switch to symmetric I/O mode setup
Jan 21 06:15:13.756018 kernel: x2apic enabled
Jan 21 06:15:13.756028 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 21 06:15:13.756044 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 21 06:15:13.756057 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 21 06:15:13.756067 kernel: kvm-guest: setup PV IPIs
Jan 21 06:15:13.756077 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 21 06:15:13.756401 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 21 06:15:13.756413 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 21 06:15:13.756424 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 21 06:15:13.756436 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 21 06:15:13.756448 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 21 06:15:13.756459 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 21 06:15:13.756476 kernel: Spectre V2 : Mitigation: Retpolines
Jan 21 06:15:13.756488 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 21 06:15:13.756501 kernel: Speculative Store Bypass: Vulnerable
Jan 21 06:15:13.756513 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 21 06:15:13.756527 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 21 06:15:13.756540 kernel: active return thunk: srso_alias_return_thunk
Jan 21 06:15:13.756951 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 21 06:15:13.756970 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 21 06:15:13.756983 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 21 06:15:13.756996 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 21 06:15:13.757007 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 21 06:15:13.757019 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 21 06:15:13.757032 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 21 06:15:13.757045 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 21 06:15:13.757061 kernel: Freeing SMP alternatives memory: 32K
Jan 21 06:15:13.757074 kernel: pid_max: default: 32768 minimum: 301
Jan 21 06:15:13.757087 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 21 06:15:13.757099 kernel: landlock: Up and running.
Jan 21 06:15:13.757112 kernel: SELinux: Initializing.
Jan 21 06:15:13.757124 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 21 06:15:13.757137 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 21 06:15:13.757153 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 21 06:15:13.757166 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 21 06:15:13.757178 kernel: signal: max sigframe size: 1776
Jan 21 06:15:13.757190 kernel: rcu: Hierarchical SRCU implementation.
Jan 21 06:15:13.757204 kernel: rcu: Max phase no-delay instances is 400.
Jan 21 06:15:13.757217 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 21 06:15:13.757229 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 21 06:15:13.757246 kernel: smp: Bringing up secondary CPUs ...
Jan 21 06:15:13.757260 kernel: smpboot: x86: Booting SMP configuration:
Jan 21 06:15:13.757271 kernel: .... node #0, CPUs: #1 #2 #3
Jan 21 06:15:13.757282 kernel: smp: Brought up 1 node, 4 CPUs
Jan 21 06:15:13.757293 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 21 06:15:13.757303 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120520K reserved, 0K cma-reserved)
Jan 21 06:15:13.757314 kernel: devtmpfs: initialized
Jan 21 06:15:13.757327 kernel: x86/mm: Memory block size: 128MB
Jan 21 06:15:13.757338 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 21 06:15:13.757348 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 21 06:15:13.757358 kernel: pinctrl core: initialized pinctrl subsystem
Jan 21 06:15:13.757370 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 21 06:15:13.757382 kernel: audit: initializing netlink subsys (disabled)
Jan 21 06:15:13.757395 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 21 06:15:13.757408 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 21 06:15:13.757424 kernel: audit: type=2000 audit(1768976091.872:1): state=initialized audit_enabled=0 res=1
Jan 21 06:15:13.757437 kernel: cpuidle: using governor menu
Jan 21 06:15:13.757449 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 21 06:15:13.757462 kernel: dca service started, version 1.12.1
Jan 21 06:15:13.757475 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 21 06:15:13.757488 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 21 06:15:13.757501 kernel: PCI: Using configuration type 1 for base access
Jan 21 06:15:13.757518 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 21 06:15:13.757532 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 21 06:15:13.757546 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 21 06:15:13.757971 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 21 06:15:13.757983 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 21 06:15:13.757993 kernel: ACPI: Added _OSI(Module Device)
Jan 21 06:15:13.758003 kernel: ACPI: Added _OSI(Processor Device)
Jan 21 06:15:13.758018 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 21 06:15:13.758028 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 21 06:15:13.758038 kernel: ACPI: Interpreter enabled
Jan 21 06:15:13.758049 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 21 06:15:13.758059 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 21 06:15:13.758069 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 21 06:15:13.758083 kernel: PCI: Using E820 reservations for host bridge windows
Jan 21 06:15:13.758098 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 21 06:15:13.758108 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 21 06:15:13.758432 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 21 06:15:13.759233 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 21 06:15:13.759488 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 21 06:15:13.759509 kernel: PCI host bridge to bus 0000:00
Jan 21 06:15:13.760138 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 21 06:15:13.760363 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 21 06:15:13.760967 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 21 06:15:13.761186 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 21 06:15:13.761350 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 21 06:15:13.761505 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 21 06:15:13.762180 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 21 06:15:13.762374 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 21 06:15:13.762943 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 21 06:15:13.763326 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 21 06:15:13.763521 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 21 06:15:13.764271 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 21 06:15:13.765264 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 21 06:15:13.765451 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 29296 usecs
Jan 21 06:15:13.766071 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 21 06:15:13.766382 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 21 06:15:13.766943 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 21 06:15:13.767169 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 21 06:15:13.767404 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 21 06:15:13.768109 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 21 06:15:13.768348 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 21 06:15:13.768524 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 21 06:15:13.769135 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 21 06:15:13.769310 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 21 06:15:13.769524 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 21 06:15:13.770112 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 21 06:15:13.770328 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 21 06:15:13.770964 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 21 06:15:13.771197 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 21 06:15:13.771417 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 31250 usecs
Jan 21 06:15:13.772128 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 21 06:15:13.772362 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 21 06:15:13.773011 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 21 06:15:13.773269 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 21 06:15:13.773497 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 21 06:15:13.773513 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 21 06:15:13.773524 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 21 06:15:13.773535 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 21 06:15:13.774242 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 21 06:15:13.774261 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 21 06:15:13.774278 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 21 06:15:13.774290 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 21 06:15:13.774302 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 21 06:15:13.774314 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 21 06:15:13.774325 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 21 06:15:13.774337 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 21 06:15:13.774349 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 21 06:15:13.774363 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 21 06:15:13.774376 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 21 06:15:13.774389 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 21 06:15:13.774402 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 21 06:15:13.774414 kernel: iommu: Default domain type: Translated
Jan 21 06:15:13.774427 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 21 06:15:13.774440 kernel: PCI: Using ACPI for IRQ routing
Jan 21 06:15:13.774456 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 21 06:15:13.774469 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 21 06:15:13.774482 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 21 06:15:13.775109 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 21 06:15:13.775328 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 21 06:15:13.775541 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 21 06:15:13.775918 kernel: vgaarb: loaded
Jan 21 06:15:13.775935 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 21 06:15:13.775946 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 21 06:15:13.775957 kernel: clocksource: Switched to clocksource kvm-clock
Jan 21 06:15:13.775968 kernel: VFS: Disk quotas dquot_6.6.0
Jan 21 06:15:13.775979 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 21 06:15:13.775989 kernel: pnp: PnP ACPI init
Jan 21 06:15:13.776231 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 21 06:15:13.776252 kernel: pnp: PnP ACPI: found 6 devices
Jan 21 06:15:13.776267 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 21 06:15:13.776277 kernel: NET: Registered PF_INET protocol family
Jan 21 06:15:13.776288 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 21 06:15:13.776298 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 21 06:15:13.776309 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 21 06:15:13.776323 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 21 06:15:13.776334 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 21 06:15:13.776344 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 21 06:15:13.776354 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 21 06:15:13.776365 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 21 06:15:13.776379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 21 06:15:13.776390 kernel: NET: Registered PF_XDP protocol family
Jan 21 06:15:13.776984 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 21 06:15:13.777193 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 21 06:15:13.777390 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 21 06:15:13.777977 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 21 06:15:13.778180 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 21 06:15:13.778381 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 21 06:15:13.778397 kernel: PCI: CLS 0 bytes, default 64
Jan 21 06:15:13.778413 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 21 06:15:13.778424 kernel: Initialise system trusted keyrings
Jan 21 06:15:13.778435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 21 06:15:13.778446 kernel: Key type asymmetric registered
Jan 21 06:15:13.778459 kernel: Asymmetric key parser 'x509' registered
Jan 21 06:15:13.778470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 21 06:15:13.778480 kernel: io scheduler mq-deadline registered
Jan 21 06:15:13.778494 kernel: io scheduler kyber registered
Jan 21 06:15:13.778504 kernel: io scheduler bfq registered
Jan 21 06:15:13.778515 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 21 06:15:13.778526 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 21 06:15:13.778536 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 21 06:15:13.778547 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 21 06:15:13.778931 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 21 06:15:13.778946 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 21 06:15:13.778957 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 21 06:15:13.778967 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 21 06:15:13.778978 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 21 06:15:13.779206 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 21 06:15:13.779223 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 21 06:15:13.779429 kernel: rtc_cmos 00:04: registered as rtc0
Jan 21 06:15:13.780036 kernel: rtc_cmos 00:04: setting system clock to 2026-01-21T06:15:06 UTC (1768976106)
Jan 21 06:15:13.780249 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 21 06:15:13.780264 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 21 06:15:13.780275 kernel: NET: Registered PF_INET6 protocol family
Jan 21 06:15:13.780286 kernel: Segment Routing with IPv6
Jan 21 06:15:13.780296 kernel: In-situ OAM (IOAM) with IPv6
Jan 21 06:15:13.780307 kernel: NET: Registered PF_PACKET protocol family
Jan 21 06:15:13.780322 kernel: Key type dns_resolver registered
Jan 21 06:15:13.780336 kernel: IPI shorthand broadcast: enabled
Jan 21 06:15:13.780347 kernel: sched_clock: Marking stable (8552181383, 4263776180)->(16212519671, -3396562108)
Jan 21 06:15:13.780358 kernel: registered taskstats version 1
Jan 21 06:15:13.780368 kernel: Loading compiled-in X.509 certificates
Jan 21 06:15:13.780378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: f14d5cffa2c990093d4ef20dbfb9c251267551e1'
Jan 21 06:15:13.780389 kernel: Demotion targets for Node 0: null
Jan 21 06:15:13.780402 kernel: Key type .fscrypt registered
Jan 21 06:15:13.780413 kernel: Key type fscrypt-provisioning registered
Jan 21 06:15:13.780423 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 21 06:15:13.780434 kernel: ima: Allocated hash algorithm: sha1
Jan 21 06:15:13.780448 kernel: ima: No architecture policies found
Jan 21 06:15:13.780459 kernel: clk: Disabling unused clocks
Jan 21 06:15:13.780469 kernel: Freeing unused kernel image (initmem) memory: 15540K
Jan 21 06:15:13.780483 kernel: Write protecting the kernel read-only data: 47104k
Jan 21 06:15:13.780493 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 21 06:15:13.780504 kernel: Run /init as init process
Jan 21 06:15:13.780514 kernel: with arguments:
Jan 21 06:15:13.780524 kernel: /init
Jan 21 06:15:13.780535 kernel: with environment:
Jan 21 06:15:13.780546 kernel: HOME=/
Jan 21 06:15:13.780926 kernel: TERM=linux
Jan 21 06:15:13.780938 kernel: SCSI subsystem initialized
Jan 21 06:15:13.780948 kernel: libata version 3.00 loaded.
Jan 21 06:15:13.781174 kernel: ahci 0000:00:1f.2: version 3.0
Jan 21 06:15:13.781191 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 21 06:15:13.781406 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 21 06:15:13.782153 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 21 06:15:13.782423 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 21 06:15:13.783091 kernel: scsi host0: ahci
Jan 21 06:15:13.783338 kernel: scsi host1: ahci
Jan 21 06:15:13.784030 kernel: scsi host2: ahci
Jan 21 06:15:13.784281 kernel: scsi host3: ahci
Jan 21 06:15:13.785156 kernel: scsi host4: ahci
Jan 21 06:15:13.785409 kernel: scsi host5: ahci
Jan 21 06:15:13.785430 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 21 06:15:13.785445 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 21 06:15:13.785458 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 21 06:15:13.785471 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 21 06:15:13.785484 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 21 06:15:13.785499 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 21 06:15:13.785511 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 21 06:15:13.785525 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 21 06:15:13.785540 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 21 06:15:13.785919 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 21 06:15:13.785933 kernel: ata3.00: LPM support broken, forcing max_power
Jan 21 06:15:13.785949 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 21 06:15:13.785960 kernel: ata3.00: applying bridge limits
Jan 21 06:15:13.785971 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 21 06:15:13.785982 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 21 06:15:13.785992 kernel: ata3.00: LPM support broken, forcing max_power
Jan 21 06:15:13.786003 kernel: ata3.00: configured for UDMA/100
Jan 21 06:15:13.786273 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 21 06:15:13.787302 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 21 06:15:13.787520 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 21 06:15:13.788152 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 21 06:15:13.788171 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 21 06:15:13.788182 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 21 06:15:13.788193 kernel: GPT:16515071 != 27000831
Jan 21 06:15:13.788209 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 21 06:15:13.788220 kernel: GPT:16515071 != 27000831
Jan 21 06:15:13.788230 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 21 06:15:13.788244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 21 06:15:13.788476 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 21 06:15:13.788489 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 21 06:15:13.788497 kernel: device-mapper: uevent: version 1.0.3
Jan 21 06:15:13.788508 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 21 06:15:13.788516 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 21 06:15:13.788524 kernel: raid6: avx2x4 gen() 15062 MB/s
Jan 21 06:15:13.788532 kernel: raid6: avx2x2 gen() 14808 MB/s
Jan 21 06:15:13.788539 kernel: raid6: avx2x1 gen() 10088 MB/s
Jan 21 06:15:13.788547 kernel: raid6: using algorithm avx2x4 gen() 15062 MB/s
Jan 21 06:15:13.788944 kernel: raid6: .... xor() 3978 MB/s, rmw enabled
Jan 21 06:15:13.788960 kernel: raid6: using avx2x2 recovery algorithm
Jan 21 06:15:13.788972 kernel: xor: automatically using best checksumming function avx
Jan 21 06:15:13.788986 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 21 06:15:13.789000 kernel: BTRFS: device fsid a1ceccde-d887-4c16-9a20-b31ca68e4074 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 21 06:15:13.789018 kernel: BTRFS info (device dm-0): first mount of filesystem a1ceccde-d887-4c16-9a20-b31ca68e4074
Jan 21 06:15:13.789027 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 21 06:15:13.789035 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 21 06:15:13.789042 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 21 06:15:13.789050 kernel: loop: module loaded
Jan 21 06:15:13.789058 kernel: loop0: detected capacity change from 0 to 100552
Jan 21 06:15:13.789066 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 21 06:15:13.789077 systemd[1]: Successfully made /usr/ read-only.
Jan 21 06:15:13.789088 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 21 06:15:13.789096 systemd[1]: Detected virtualization kvm.
Jan 21 06:15:13.789104 systemd[1]: Detected architecture x86-64.
Jan 21 06:15:13.789113 systemd[1]: Running in initrd.
Jan 21 06:15:13.789121 systemd[1]: No hostname configured, using default hostname.
Jan 21 06:15:13.789131 systemd[1]: Hostname set to .
Jan 21 06:15:13.789139 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 21 06:15:13.789147 systemd[1]: Queued start job for default target initrd.target.
Jan 21 06:15:13.789155 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 21 06:15:13.789163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 21 06:15:13.789171 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 21 06:15:13.789181 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 21 06:15:13.789191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 21 06:15:13.789200 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 21 06:15:13.789208 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 21 06:15:13.789217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 21 06:15:13.789225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 21 06:15:13.789235 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 21 06:15:13.789244 systemd[1]: Reached target paths.target - Path Units.
Jan 21 06:15:13.789252 systemd[1]: Reached target slices.target - Slice Units.
Jan 21 06:15:13.789260 systemd[1]: Reached target swap.target - Swaps.
Jan 21 06:15:13.789268 systemd[1]: Reached target timers.target - Timer Units.
Jan 21 06:15:13.789276 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 21 06:15:13.789285 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 21 06:15:13.789295 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 21 06:15:13.789303 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 21 06:15:13.789311 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 21 06:15:13.789319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 21 06:15:13.789327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 21 06:15:13.789335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 21 06:15:13.789344 systemd[1]: Reached target sockets.target - Socket Units.
Jan 21 06:15:13.789354 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 21 06:15:13.789362 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 21 06:15:13.789371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 21 06:15:13.789379 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 21 06:15:13.789387 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 21 06:15:13.789396 systemd[1]: Starting systemd-fsck-usr.service...
Jan 21 06:15:13.789406 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 21 06:15:13.789414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 21 06:15:13.789423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 21 06:15:13.789461 systemd-journald[320]: Collecting audit messages is enabled.
Jan 21 06:15:13.789484 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 21 06:15:13.789493 kernel: audit: type=1130 audit(1768976113.788:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.789501 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 21 06:15:13.789512 systemd-journald[320]: Journal started
Jan 21 06:15:13.789535 systemd-journald[320]: Runtime Journal (/run/log/journal/2512dfe7a48c497c8ed083a198dad8c0) is 6M, max 48.2M, 42.1M free.
Jan 21 06:15:13.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.870060 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 21 06:15:13.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.870879 kernel: audit: type=1130 audit(1768976113.867:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.945123 systemd[1]: Finished systemd-fsck-usr.service.
Jan 21 06:15:13.994274 kernel: audit: type=1130 audit(1768976113.942:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:13.998425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 21 06:15:14.078010 kernel: audit: type=1130 audit(1768976113.993:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:14.127544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 21 06:15:15.316203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 21 06:15:15.316248 kernel: Bridge firewalling registered
Jan 21 06:15:14.192472 systemd-modules-load[322]: Inserted module 'br_netfilter'
Jan 21 06:15:15.353083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 21 06:15:15.429530 kernel: audit: type=1130 audit(1768976115.372:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.427233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 21 06:15:15.504997 kernel: audit: type=1130 audit(1768976115.450:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.462262 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 21 06:15:15.463099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 21 06:15:15.509266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 21 06:15:15.616245 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 21 06:15:15.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.661076 kernel: audit: type=1130 audit(1768976115.616:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.706075 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 21 06:15:15.781027 kernel: audit: type=1130 audit(1768976115.724:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.726556 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 21 06:15:15.863093 kernel: audit: type=1130 audit(1768976115.780:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:15.787181 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 21 06:15:15.909225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 21 06:15:15.955228 dracut-cmdline[351]: dracut-109
Jan 21 06:15:15.974257 dracut-cmdline[351]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=81dc9acd509cfd27a090d5b49f20e13d238e4baed94e55e81b300154aedac937
Jan 21 06:15:16.061055 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 21 06:15:16.140015 kernel: audit: type=1130 audit(1768976116.060:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:16.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:16.064000 audit: BPF prog-id=6 op=LOAD
Jan 21 06:15:16.106084 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 21 06:15:16.210368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 21 06:15:16.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:16.361013 systemd-resolved[380]: Positive Trust Anchors:
Jan 21 06:15:16.361156 systemd-resolved[380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 21 06:15:16.361164 systemd-resolved[380]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 21 06:15:16.361203 systemd-resolved[380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 21 06:15:16.421034 systemd-resolved[380]: Defaulting to hostname 'linux'.
Jan 21 06:15:16.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:16.422994 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 21 06:15:16.550814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 21 06:15:16.641898 kernel: Loading iSCSI transport class v2.0-870.
Jan 21 06:15:16.667805 kernel: iscsi: registered transport (tcp)
Jan 21 06:15:16.708986 kernel: iscsi: registered transport (qla4xxx)
Jan 21 06:15:16.709065 kernel: QLogic iSCSI HBA Driver
Jan 21 06:15:16.764738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 21 06:15:16.803561 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 21 06:15:16.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:16.808895 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 21 06:15:16.904557 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 21 06:15:16.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:16.928350 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 21 06:15:16.938963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 21 06:15:17.019826 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 21 06:15:17.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.027000 audit: BPF prog-id=7 op=LOAD
Jan 21 06:15:17.027000 audit: BPF prog-id=8 op=LOAD
Jan 21 06:15:17.029147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 21 06:15:17.090006 systemd-udevd[583]: Using default interface naming scheme 'v257'.
Jan 21 06:15:17.107177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 21 06:15:17.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.109907 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 21 06:15:17.173074 dracut-pre-trigger[615]: rd.md=0: removing MD RAID activation
Jan 21 06:15:17.251340 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 21 06:15:17.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.265301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 21 06:15:17.314919 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 21 06:15:17.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.332000 audit: BPF prog-id=9 op=LOAD
Jan 21 06:15:17.334854 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 21 06:15:17.419196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 21 06:15:17.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.424200 systemd-networkd[725]: lo: Link UP
Jan 21 06:15:17.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.424207 systemd-networkd[725]: lo: Gained carrier
Jan 21 06:15:17.428333 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 21 06:15:17.449908 systemd[1]: Reached target network.target - Network.
Jan 21 06:15:17.470464 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 21 06:15:17.541512 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 21 06:15:17.561353 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 21 06:15:17.580565 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 21 06:15:17.619463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 21 06:15:17.648755 kernel: cryptd: max_cpu_qlen set to 1000
Jan 21 06:15:17.649171 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 21 06:15:17.679993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 21 06:15:17.733257 kernel: AES CTR mode by8 optimization enabled
Jan 21 06:15:17.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.733341 disk-uuid[768]: Primary Header is updated.
Jan 21 06:15:17.733341 disk-uuid[768]: Secondary Entries is updated.
Jan 21 06:15:17.733341 disk-uuid[768]: Secondary Header is updated.
Jan 21 06:15:17.790887 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 21 06:15:17.680391 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 21 06:15:17.690007 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 21 06:15:17.702391 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 21 06:15:18.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.702398 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 21 06:15:17.705061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 21 06:15:18.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:17.706004 systemd-networkd[725]: eth0: Link UP
Jan 21 06:15:17.706333 systemd-networkd[725]: eth0: Gained carrier
Jan 21 06:15:17.706347 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 21 06:15:17.739957 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 21 06:15:17.911214 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 21 06:15:18.311892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 21 06:15:18.322055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 21 06:15:18.336551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 21 06:15:18.346327 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 21 06:15:18.357026 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 21 06:15:18.444207 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 21 06:15:18.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:18.813275 disk-uuid[777]: Warning: The kernel is still using the old partition table.
Jan 21 06:15:18.813275 disk-uuid[777]: The new table will be used at the next reboot or after you
Jan 21 06:15:18.813275 disk-uuid[777]: run partprobe(8) or kpartx(8)
Jan 21 06:15:18.813275 disk-uuid[777]: The operation has completed successfully.
Jan 21 06:15:18.853424 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 21 06:15:18.853985 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 21 06:15:18.913423 kernel: kauditd_printk_skb: 18 callbacks suppressed
Jan 21 06:15:18.913466 kernel: audit: type=1130 audit(1768976118.859:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:18.913489 kernel: audit: type=1131 audit(1768976118.859:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:18.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:18.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:18.914215 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 21 06:15:19.011031 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859)
Jan 21 06:15:19.011096 kernel: BTRFS info (device vda6): first mount of filesystem 7507227a-f217-4f04-b931-d1b758f0e0f0
Jan 21 06:15:19.022286 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 21 06:15:19.045421 kernel: BTRFS info (device vda6): turning on async discard
Jan 21 06:15:19.045501 kernel: BTRFS info (device vda6): enabling free space tree
Jan 21 06:15:19.069874 kernel: BTRFS info (device vda6): last unmount of filesystem 7507227a-f217-4f04-b931-d1b758f0e0f0
Jan 21 06:15:19.074459 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 21 06:15:19.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:19.092070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 21 06:15:19.126162 kernel: audit: type=1130 audit(1768976119.089:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:19.256230 systemd-networkd[725]: eth0: Gained IPv6LL
Jan 21 06:15:19.297961 ignition[878]: Ignition 2.24.0
Jan 21 06:15:19.298042 ignition[878]: Stage: fetch-offline
Jan 21 06:15:19.298102 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 21 06:15:19.298118 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 21 06:15:19.298198 ignition[878]: parsed url from cmdline: ""
Jan 21 06:15:19.298202 ignition[878]: no config URL provided
Jan 21 06:15:19.298207 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 21 06:15:19.298217 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 21 06:15:19.298259 ignition[878]: op(1): [started] loading QEMU firmware config module
Jan 21 06:15:19.298263 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 21 06:15:19.352392 ignition[878]: op(1): [finished] loading QEMU firmware config module
Jan 21 06:15:20.436380 ignition[878]: parsing config with SHA512: eb6f30730b21e97c8385182d0922544029080fb2f27920df5d89c0af4ef1625c6e3cdc1be596f9def1da3d4d3c20b0ace91d011a85f6c5612b68354fd09b3c94
Jan 21 06:15:20.458466 unknown[878]: fetched base config from "system"
Jan 21 06:15:20.458970 unknown[878]: fetched user config from "qemu"
Jan 21 06:15:20.512035 kernel: audit: type=1130 audit(1768976120.468:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:20.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:20.459332 ignition[878]: fetch-offline: fetch-offline passed
Jan 21 06:15:20.463592 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 21 06:15:20.459418 ignition[878]: Ignition finished successfully
Jan 21 06:15:20.469823 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 21 06:15:20.474190 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 21 06:15:20.601367 ignition[889]: Ignition 2.24.0
Jan 21 06:15:20.601453 ignition[889]: Stage: kargs
Jan 21 06:15:20.601827 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 21 06:15:20.601841 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 21 06:15:20.630965 ignition[889]: kargs: kargs passed
Jan 21 06:15:20.631121 ignition[889]: Ignition finished successfully
Jan 21 06:15:20.646412 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 21 06:15:20.679407 kernel: audit: type=1130 audit(1768976120.646:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:20.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:20.649879 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 21 06:15:20.755350 ignition[897]: Ignition 2.24.0
Jan 21 06:15:20.755423 ignition[897]: Stage: disks
Jan 21 06:15:20.755590 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 21 06:15:20.755807 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 21 06:15:20.756808 ignition[897]: disks: disks passed
Jan 21 06:15:20.757030 ignition[897]: Ignition finished successfully
Jan 21 06:15:20.786359 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 21 06:15:20.827182 kernel: audit: type=1130 audit(1768976120.790:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:20.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:20.791219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 21 06:15:20.835970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 21 06:15:20.845146 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 21 06:15:20.862882 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 21 06:15:20.863088 systemd[1]: Reached target basic.target - Basic System.
Jan 21 06:15:20.914869 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 21 06:15:20.991199 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 21 06:15:21.001060 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 21 06:15:21.032358 kernel: audit: type=1130 audit(1768976121.010:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:21.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:21.013570 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 21 06:15:21.420591 kernel: EXT4-fs (vda9): mounted filesystem 3ff62864-5f9e-426d-9652-a1e94c623aaa r/w with ordered data mode. Quota mode: none.
Jan 21 06:15:21.425018 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 21 06:15:21.426280 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 21 06:15:21.441001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 21 06:15:21.485969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 21 06:15:21.511948 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914)
Jan 21 06:15:21.486518 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 21 06:15:21.486562 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 21 06:15:21.486593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 21 06:15:21.580457 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 21 06:15:21.619029 kernel: BTRFS info (device vda6): first mount of filesystem 7507227a-f217-4f04-b931-d1b758f0e0f0
Jan 21 06:15:21.619061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 21 06:15:21.619073 kernel: BTRFS info (device vda6): turning on async discard
Jan 21 06:15:21.619084 kernel: BTRFS info (device vda6): enabling free space tree
Jan 21 06:15:21.601476 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 21 06:15:21.643134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 21 06:15:22.136142 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 06:15:22.175872 kernel: audit: type=1130 audit(1768976122.135:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:22.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:22.139336 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 06:15:22.196460 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 06:15:22.259106 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 21 06:15:22.277824 kernel: BTRFS info (device vda6): last unmount of filesystem 7507227a-f217-4f04-b931-d1b758f0e0f0 Jan 21 06:15:22.305092 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 21 06:15:22.344939 kernel: audit: type=1130 audit(1768976122.304:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:22.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:22.345020 ignition[1012]: INFO : Ignition 2.24.0 Jan 21 06:15:22.345020 ignition[1012]: INFO : Stage: mount Jan 21 06:15:22.345020 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 06:15:22.345020 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 21 06:15:22.345020 ignition[1012]: INFO : mount: mount passed Jan 21 06:15:22.345020 ignition[1012]: INFO : Ignition finished successfully Jan 21 06:15:22.427310 kernel: audit: type=1130 audit(1768976122.351:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:22.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:22.343089 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 06:15:22.374933 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 21 06:15:22.444228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 06:15:22.490992 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022) Jan 21 06:15:22.508870 kernel: BTRFS info (device vda6): first mount of filesystem 7507227a-f217-4f04-b931-d1b758f0e0f0 Jan 21 06:15:22.508946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 21 06:15:22.533735 kernel: BTRFS info (device vda6): turning on async discard Jan 21 06:15:22.533790 kernel: BTRFS info (device vda6): enabling free space tree Jan 21 06:15:22.536907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 21 06:15:22.619092 ignition[1039]: INFO : Ignition 2.24.0 Jan 21 06:15:22.619092 ignition[1039]: INFO : Stage: files Jan 21 06:15:22.619092 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 06:15:22.619092 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 21 06:15:22.653951 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Jan 21 06:15:22.671248 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 06:15:22.671248 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 06:15:22.704218 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 06:15:22.717082 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 06:15:22.717082 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 06:15:22.717082 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 06:15:22.717082 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 21 06:15:22.706584 unknown[1039]: wrote ssh authorized keys file for user: core Jan 21 06:15:22.840896 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 06:15:22.933798 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 06:15:22.933798 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 06:15:22.979544 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 21 06:15:23.347539 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 06:15:23.833597 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 06:15:23.833597 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 21 06:15:23.873394 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 21 06:15:24.065538 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 21 06:15:24.082169 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 21 06:15:24.098149 ignition[1039]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 21 06:15:24.098149 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 21 06:15:24.098149 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 06:15:24.141862 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 06:15:24.160885 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 06:15:24.160885 ignition[1039]: INFO : files: files passed Jan 21 06:15:24.160885 ignition[1039]: INFO : Ignition finished successfully Jan 21 06:15:24.199379 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 06:15:24.241268 kernel: audit: type=1130 audit(1768976124.210:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.215021 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 21 06:15:24.264117 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 21 06:15:24.305537 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 06:15:24.308121 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 06:15:24.376942 kernel: audit: type=1130 audit(1768976124.325:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:24.376990 kernel: audit: type=1131 audit(1768976124.325:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.377309 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Jan 21 06:15:24.390911 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 06:15:24.390911 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 06:15:24.417549 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 06:15:24.465569 kernel: audit: type=1130 audit(1768976124.426:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.410246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 06:15:24.428288 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 21 06:15:24.479156 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 06:15:24.630563 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 06:15:24.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.631091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 06:15:24.687183 kernel: audit: type=1130 audit(1768976124.651:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.687222 kernel: audit: type=1131 audit(1768976124.651:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.653209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 21 06:15:24.709964 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 06:15:24.721435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 06:15:24.724026 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 06:15:24.821571 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 06:15:24.864415 kernel: audit: type=1130 audit(1768976124.833:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:24.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.837307 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 06:15:24.907580 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 06:15:24.907966 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 06:15:24.926998 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 06:15:24.947294 systemd[1]: Stopped target timers.target - Timer Units. Jan 21 06:15:24.973023 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 06:15:25.014287 kernel: audit: type=1131 audit(1768976124.983:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:24.973358 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 06:15:24.984517 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 21 06:15:25.021414 systemd[1]: Stopped target basic.target - Basic System. Jan 21 06:15:25.023576 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 06:15:25.041254 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 06:15:25.057122 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 06:15:25.088163 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 21 06:15:25.105483 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 06:15:25.125356 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 06:15:25.144024 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 06:15:25.165475 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 06:15:25.201554 systemd[1]: Stopped target swap.target - Swaps. Jan 21 06:15:25.211387 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 06:15:25.253010 kernel: audit: type=1131 audit(1768976125.217:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.211555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 06:15:25.253348 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 06:15:25.270174 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 06:15:25.298492 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 21 06:15:25.300057 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 06:15:25.325335 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 06:15:25.325898 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 06:15:25.376176 kernel: audit: type=1131 audit(1768976125.334:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:25.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.376558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 06:15:25.377958 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 06:15:25.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.410154 systemd[1]: Stopped target paths.target - Path Units. Jan 21 06:15:25.420157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 06:15:25.427535 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 06:15:25.445368 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 06:15:25.471356 systemd[1]: Stopped target sockets.target - Socket Units. Jan 21 06:15:25.472112 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 06:15:25.472257 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 06:15:25.489391 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 06:15:25.489529 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 06:15:25.515315 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 21 06:15:25.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.515429 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Jan 21 06:15:25.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.528521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 06:15:25.529160 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 06:15:25.555191 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 06:15:25.555444 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 06:15:25.567102 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 21 06:15:25.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.584451 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 06:15:25.587092 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 06:15:25.670500 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 06:15:25.688071 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 06:15:25.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.688391 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 21 06:15:25.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.712545 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 06:15:25.713012 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 06:15:25.713404 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 06:15:25.713551 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 06:15:25.823030 ignition[1096]: INFO : Ignition 2.24.0 Jan 21 06:15:25.823030 ignition[1096]: INFO : Stage: umount Jan 21 06:15:25.823030 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 06:15:25.823030 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 21 06:15:25.823030 ignition[1096]: INFO : umount: umount passed Jan 21 06:15:25.823030 ignition[1096]: INFO : Ignition finished successfully Jan 21 06:15:25.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:25.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.774270 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 06:15:25.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.783068 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 21 06:15:25.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.822580 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 06:15:25.823938 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 06:15:25.833063 systemd[1]: Stopped target network.target - Network. Jan 21 06:15:25.846972 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 06:15:26.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.847103 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 06:15:26.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.859305 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 21 06:15:25.859423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 06:15:25.890532 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 21 06:15:25.890922 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 06:15:25.918401 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 06:15:26.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.918504 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 06:15:25.929351 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 21 06:15:26.123000 audit: BPF prog-id=9 op=UNLOAD Jan 21 06:15:25.954170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 06:15:25.966004 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 21 06:15:25.996124 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 06:15:26.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:25.996349 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 06:15:26.008082 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 06:15:26.008231 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 06:15:26.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.066251 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 06:15:26.238000 audit: BPF prog-id=6 op=UNLOAD Jan 21 06:15:26.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:26.066545 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 06:15:26.112382 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 06:15:26.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.120475 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 06:15:26.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.120556 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 06:15:26.142980 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 06:15:26.161893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 21 06:15:26.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.161999 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 06:15:26.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.184522 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 06:15:26.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.194972 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 21 06:15:26.212326 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 06:15:26.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.233134 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 06:15:26.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.233258 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 21 06:15:26.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.248174 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 06:15:26.248254 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 06:15:26.272940 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 21 06:15:26.273264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 06:15:26.282263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 06:15:26.282362 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 06:15:26.295342 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 06:15:26.295390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 06:15:26.308408 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 06:15:26.308507 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 06:15:26.326849 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 21 06:15:26.326949 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 06:15:26.340541 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 06:15:26.340815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 06:15:26.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.356583 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 06:15:26.364942 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 06:15:26.365038 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 06:15:26.380403 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 06:15:26.380501 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 06:15:26.395492 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 06:15:26.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:26.395879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 06:15:26.542085 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 06:15:26.542314 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 06:15:26.607218 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 21 06:15:26.607364 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 06:15:26.616188 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 06:15:26.631501 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 06:15:26.659863 systemd[1]: Switching root. Jan 21 06:15:26.719987 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 21 06:15:26.720081 systemd-journald[320]: Journal stopped Jan 21 06:15:29.950099 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 06:15:29.950169 kernel: SELinux: policy capability open_perms=1 Jan 21 06:15:29.950189 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 06:15:29.950205 kernel: SELinux: policy capability always_check_network=0 Jan 21 06:15:29.950234 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 06:15:29.950249 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 06:15:29.950271 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 06:15:29.950285 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 06:15:29.950306 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 06:15:29.950330 systemd[1]: Successfully loaded SELinux policy in 136.669ms. Jan 21 06:15:29.950354 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.959ms. Jan 21 06:15:29.950377 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 06:15:29.950395 systemd[1]: Detected virtualization kvm. Jan 21 06:15:29.950414 systemd[1]: Detected architecture x86-64. Jan 21 06:15:29.950435 systemd[1]: Detected first boot. 
Jan 21 06:15:29.950453 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 06:15:29.950469 zram_generator::config[1139]: No configuration found. Jan 21 06:15:29.950487 kernel: Guest personality initialized and is inactive Jan 21 06:15:29.950502 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 21 06:15:29.950523 kernel: Initialized host personality Jan 21 06:15:29.950538 kernel: NET: Registered PF_VSOCK protocol family Jan 21 06:15:29.950554 systemd[1]: Populated /etc with preset unit settings. Jan 21 06:15:29.950571 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 06:15:29.950589 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 06:15:29.950810 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 21 06:15:29.950837 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 06:15:29.950858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 06:15:29.950877 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 06:15:29.950898 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 06:15:29.950914 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 06:15:29.950930 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 06:15:29.950949 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 06:15:29.950966 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 06:15:29.950984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 06:15:29.951001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 21 06:15:29.951019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 06:15:29.951035 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 06:15:29.951052 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 06:15:29.951070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 06:15:29.951087 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 21 06:15:29.951106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 06:15:29.951123 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 06:15:29.951141 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 06:15:29.951159 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 21 06:15:29.951175 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 06:15:29.951192 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 06:15:29.951213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 06:15:29.951229 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 06:15:29.951246 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 21 06:15:29.951264 systemd[1]: Reached target slices.target - Slice Units. Jan 21 06:15:29.951280 systemd[1]: Reached target swap.target - Swaps. Jan 21 06:15:29.951296 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 21 06:15:29.951312 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 06:15:29.951331 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Jan 21 06:15:29.951350 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 06:15:29.951367 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 21 06:15:29.951385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 06:15:29.951402 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 06:15:29.951418 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 06:15:29.951434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 06:15:29.951453 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 06:15:29.951472 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 06:15:29.951489 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 06:15:29.951507 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 06:15:29.951525 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 06:15:29.951541 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 06:15:29.951557 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 06:15:29.951575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 06:15:29.951595 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 06:15:29.951834 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 06:15:29.951856 systemd[1]: Reached target machines.target - Containers. Jan 21 06:15:29.951873 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 21 06:15:29.951889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 06:15:29.951905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 06:15:29.951928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 06:15:29.951945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 06:15:29.951962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 06:15:29.951979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 06:15:29.951998 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 06:15:29.952014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 06:15:29.952030 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 06:15:29.952050 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 21 06:15:29.952069 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 06:15:29.952085 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 06:15:29.952100 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 21 06:15:29.952119 kernel: audit: type=1131 audit(1768976129.667:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:29.952137 systemd[1]: Stopped systemd-fsck-usr.service. Jan 21 06:15:29.952155 kernel: audit: type=1131 audit(1768976129.719:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:29.952175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 06:15:29.952193 kernel: audit: type=1334 audit(1768976129.762:102): prog-id=14 op=UNLOAD Jan 21 06:15:29.952208 kernel: audit: type=1334 audit(1768976129.762:103): prog-id=13 op=UNLOAD Jan 21 06:15:29.952223 kernel: audit: type=1334 audit(1768976129.772:104): prog-id=15 op=LOAD Jan 21 06:15:29.952239 kernel: audit: type=1334 audit(1768976129.791:105): prog-id=16 op=LOAD Jan 21 06:15:29.952254 kernel: ACPI: bus type drm_connector registered Jan 21 06:15:29.952274 kernel: audit: type=1334 audit(1768976129.800:106): prog-id=17 op=LOAD Jan 21 06:15:29.952290 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 06:15:29.952306 kernel: fuse: init (API version 7.41) Jan 21 06:15:29.952324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 06:15:29.952343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 06:15:29.952359 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 06:15:29.952376 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 06:15:29.952418 systemd-journald[1225]: Collecting audit messages is enabled. Jan 21 06:15:29.952452 kernel: audit: type=1305 audit(1768976129.946:107): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 06:15:29.952472 systemd-journald[1225]: Journal started Jan 21 06:15:29.952499 systemd-journald[1225]: Runtime Journal (/run/log/journal/2512dfe7a48c497c8ed083a198dad8c0) is 6M, max 48.2M, 42.1M free. 
Jan 21 06:15:28.968000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 21 06:15:29.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:29.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:29.762000 audit: BPF prog-id=14 op=UNLOAD Jan 21 06:15:29.762000 audit: BPF prog-id=13 op=UNLOAD Jan 21 06:15:29.772000 audit: BPF prog-id=15 op=LOAD Jan 21 06:15:29.791000 audit: BPF prog-id=16 op=LOAD Jan 21 06:15:29.800000 audit: BPF prog-id=17 op=LOAD Jan 21 06:15:29.946000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 06:15:28.336554 systemd[1]: Queued start job for default target multi-user.target. Jan 21 06:15:28.369440 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 21 06:15:28.371231 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 06:15:28.371933 systemd[1]: systemd-journald.service: Consumed 2.840s CPU time. 
Jan 21 06:15:29.946000 audit[1225]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff2a9a2af0 a2=4000 a3=0 items=0 ppid=1 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:15:29.997909 kernel: audit: type=1300 audit(1768976129.946:107): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff2a9a2af0 a2=4000 a3=0 items=0 ppid=1 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:15:29.998007 kernel: audit: type=1327 audit(1768976129.946:107): proctitle="/usr/lib/systemd/systemd-journald" Jan 21 06:15:29.946000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 06:15:30.030850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 06:15:30.058860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 06:15:30.070834 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 06:15:30.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.082187 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 06:15:30.090228 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 06:15:30.100225 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 06:15:30.110837 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 21 06:15:30.120130 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 06:15:30.129353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 06:15:30.137378 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 06:15:30.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.147195 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 06:15:30.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.157317 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 06:15:30.158291 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 06:15:30.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.169240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 06:15:30.169584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 06:15:30.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:30.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.181060 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 06:15:30.181463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 06:15:30.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.192285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 06:15:30.193020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 06:15:30.206327 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 06:15:30.207062 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 06:15:30.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:30.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.218540 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 06:15:30.219515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 06:15:30.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.231107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 06:15:30.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.242853 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 06:15:30.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.257460 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 06:15:30.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:15:30.269269 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 21 06:15:30.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.281862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 06:15:30.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.315270 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 06:15:30.327401 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 06:15:30.340322 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 06:15:30.351425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 06:15:30.362309 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 21 06:15:30.362426 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 06:15:30.372533 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 06:15:30.383094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 06:15:30.383321 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 06:15:30.387360 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 21 06:15:30.401080 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 06:15:30.413106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 06:15:30.415463 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 21 06:15:30.426868 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 06:15:30.429221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 06:15:30.442060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 06:15:30.453376 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 06:15:30.464169 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 06:15:30.476177 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 06:15:30.485515 systemd-journald[1225]: Time spent on flushing to /var/log/journal/2512dfe7a48c497c8ed083a198dad8c0 is 19.448ms for 1121 entries. Jan 21 06:15:30.485515 systemd-journald[1225]: System Journal (/var/log/journal/2512dfe7a48c497c8ed083a198dad8c0) is 8M, max 163.5M, 155.5M free. Jan 21 06:15:30.519310 systemd-journald[1225]: Received client request to flush runtime journal. Jan 21 06:15:30.519365 kernel: loop1: detected capacity change from 0 to 229808 Jan 21 06:15:30.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.486262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 06:15:30.511321 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 21 06:15:30.522570 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 06:15:30.539932 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 21 06:15:30.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.552960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 06:15:30.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.577016 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 21 06:15:30.589022 kernel: loop2: detected capacity change from 0 to 50784 Jan 21 06:15:30.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.597000 audit: BPF prog-id=18 op=LOAD Jan 21 06:15:30.598000 audit: BPF prog-id=19 op=LOAD Jan 21 06:15:30.598000 audit: BPF prog-id=20 op=LOAD Jan 21 06:15:30.601226 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 06:15:30.612000 audit: BPF prog-id=21 op=LOAD Jan 21 06:15:30.615943 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 06:15:30.627275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 06:15:30.636520 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 21 06:15:30.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.655000 audit: BPF prog-id=22 op=LOAD Jan 21 06:15:30.657000 audit: BPF prog-id=23 op=LOAD Jan 21 06:15:30.657000 audit: BPF prog-id=24 op=LOAD Jan 21 06:15:30.659486 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 06:15:30.671000 audit: BPF prog-id=25 op=LOAD Jan 21 06:15:30.671000 audit: BPF prog-id=26 op=LOAD Jan 21 06:15:30.672000 audit: BPF prog-id=27 op=LOAD Jan 21 06:15:30.675157 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 21 06:15:30.700851 kernel: loop3: detected capacity change from 0 to 111560 Jan 21 06:15:30.720220 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Jan 21 06:15:30.720337 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Jan 21 06:15:30.737285 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 06:15:30.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:15:30.776035 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 06:15:30.784075 systemd-nsresourced[1282]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 06:15:30.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 21 06:15:30.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:30.788570 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 21 06:15:30.812893 kernel: loop4: detected capacity change from 0 to 229808
Jan 21 06:15:30.845990 kernel: loop5: detected capacity change from 0 to 50784
Jan 21 06:15:30.873147 kernel: loop6: detected capacity change from 0 to 111560
Jan 21 06:15:30.886417 (sd-merge)[1291]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 21 06:15:30.891868 (sd-merge)[1291]: Merged extensions into '/usr'.
Jan 21 06:15:30.901108 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 21 06:15:30.901246 systemd[1]: Reloading...
Jan 21 06:15:30.902321 systemd-oomd[1277]: No swap; memory pressure usage will be degraded
Jan 21 06:15:30.948085 systemd-resolved[1278]: Positive Trust Anchors:
Jan 21 06:15:30.948180 systemd-resolved[1278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 21 06:15:30.948186 systemd-resolved[1278]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 21 06:15:30.948214 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 21 06:15:30.958344 systemd-resolved[1278]: Defaulting to hostname 'linux'.
Jan 21 06:15:31.001899 zram_generator::config[1331]: No configuration found.
Jan 21 06:15:31.269044 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 21 06:15:31.269426 systemd[1]: Reloading finished in 367 ms.
Jan 21 06:15:31.316388 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 21 06:15:31.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:31.328370 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 21 06:15:31.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:31.339254 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 21 06:15:31.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:31.349967 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 21 06:15:31.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:31.373455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 21 06:15:31.400985 systemd[1]: Starting ensure-sysext.service...
Jan 21 06:15:31.408458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 21 06:15:31.419000 audit: BPF prog-id=8 op=UNLOAD
Jan 21 06:15:31.419000 audit: BPF prog-id=7 op=UNLOAD
Jan 21 06:15:31.420000 audit: BPF prog-id=28 op=LOAD
Jan 21 06:15:31.420000 audit: BPF prog-id=29 op=LOAD
Jan 21 06:15:31.423259 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 21 06:15:31.434000 audit: BPF prog-id=30 op=LOAD
Jan 21 06:15:31.434000 audit: BPF prog-id=15 op=UNLOAD
Jan 21 06:15:31.434000 audit: BPF prog-id=31 op=LOAD
Jan 21 06:15:31.434000 audit: BPF prog-id=32 op=LOAD
Jan 21 06:15:31.434000 audit: BPF prog-id=16 op=UNLOAD
Jan 21 06:15:31.434000 audit: BPF prog-id=17 op=UNLOAD
Jan 21 06:15:31.436000 audit: BPF prog-id=33 op=LOAD
Jan 21 06:15:31.436000 audit: BPF prog-id=18 op=UNLOAD
Jan 21 06:15:31.436000 audit: BPF prog-id=34 op=LOAD
Jan 21 06:15:31.436000 audit: BPF prog-id=35 op=LOAD
Jan 21 06:15:31.437000 audit: BPF prog-id=19 op=UNLOAD
Jan 21 06:15:31.437000 audit: BPF prog-id=20 op=UNLOAD
Jan 21 06:15:31.438000 audit: BPF prog-id=36 op=LOAD
Jan 21 06:15:31.438000 audit: BPF prog-id=22 op=UNLOAD
Jan 21 06:15:31.438000 audit: BPF prog-id=37 op=LOAD
Jan 21 06:15:31.438000 audit: BPF prog-id=38 op=LOAD
Jan 21 06:15:31.438000 audit: BPF prog-id=23 op=UNLOAD
Jan 21 06:15:31.438000 audit: BPF prog-id=24 op=UNLOAD
Jan 21 06:15:31.440000 audit: BPF prog-id=39 op=LOAD
Jan 21 06:15:31.440000 audit: BPF prog-id=21 op=UNLOAD
Jan 21 06:15:31.441000 audit: BPF prog-id=40 op=LOAD
Jan 21 06:15:31.441000 audit: BPF prog-id=25 op=UNLOAD
Jan 21 06:15:31.441000 audit: BPF prog-id=41 op=LOAD
Jan 21 06:15:31.441000 audit: BPF prog-id=42 op=LOAD
Jan 21 06:15:31.441000 audit: BPF prog-id=26 op=UNLOAD
Jan 21 06:15:31.441000 audit: BPF prog-id=27 op=UNLOAD
Jan 21 06:15:31.451219 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)...
Jan 21 06:15:31.451296 systemd[1]: Reloading...
Jan 21 06:15:31.452491 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 21 06:15:31.452837 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 21 06:15:31.453140 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 21 06:15:31.455142 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jan 21 06:15:31.455313 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
Jan 21 06:15:31.467518 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Jan 21 06:15:31.467593 systemd-tmpfiles[1369]: Skipping /boot
Jan 21 06:15:31.488295 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
Jan 21 06:15:31.488378 systemd-tmpfiles[1369]: Skipping /boot
Jan 21 06:15:31.501300 systemd-udevd[1370]: Using default interface naming scheme 'v257'.
Jan 21 06:15:31.568974 zram_generator::config[1401]: No configuration found.
Jan 21 06:15:31.715991 kernel: mousedev: PS/2 mouse device common for all mice
Jan 21 06:15:31.739124 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 21 06:15:31.751849 kernel: ACPI: button: Power Button [PWRF]
Jan 21 06:15:31.777169 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 21 06:15:31.788505 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 21 06:15:31.889399 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 21 06:15:31.890002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 21 06:15:31.899865 systemd[1]: Reloading finished in 448 ms.
Jan 21 06:15:31.910111 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 21 06:15:31.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:31.921000 audit: BPF prog-id=43 op=LOAD
Jan 21 06:15:31.921000 audit: BPF prog-id=44 op=LOAD
Jan 21 06:15:31.921000 audit: BPF prog-id=28 op=UNLOAD
Jan 21 06:15:31.921000 audit: BPF prog-id=29 op=UNLOAD
Jan 21 06:15:31.922000 audit: BPF prog-id=45 op=LOAD
Jan 21 06:15:31.923000 audit: BPF prog-id=33 op=UNLOAD
Jan 21 06:15:31.925000 audit: BPF prog-id=46 op=LOAD
Jan 21 06:15:31.925000 audit: BPF prog-id=47 op=LOAD
Jan 21 06:15:31.925000 audit: BPF prog-id=34 op=UNLOAD
Jan 21 06:15:31.925000 audit: BPF prog-id=35 op=UNLOAD
Jan 21 06:15:31.926000 audit: BPF prog-id=48 op=LOAD
Jan 21 06:15:31.926000 audit: BPF prog-id=40 op=UNLOAD
Jan 21 06:15:31.926000 audit: BPF prog-id=49 op=LOAD
Jan 21 06:15:31.926000 audit: BPF prog-id=50 op=LOAD
Jan 21 06:15:31.926000 audit: BPF prog-id=41 op=UNLOAD
Jan 21 06:15:31.926000 audit: BPF prog-id=42 op=UNLOAD
Jan 21 06:15:31.928000 audit: BPF prog-id=51 op=LOAD
Jan 21 06:15:31.928000 audit: BPF prog-id=30 op=UNLOAD
Jan 21 06:15:31.928000 audit: BPF prog-id=52 op=LOAD
Jan 21 06:15:31.928000 audit: BPF prog-id=53 op=LOAD
Jan 21 06:15:31.928000 audit: BPF prog-id=31 op=UNLOAD
Jan 21 06:15:31.928000 audit: BPF prog-id=32 op=UNLOAD
Jan 21 06:15:31.929000 audit: BPF prog-id=54 op=LOAD
Jan 21 06:15:31.933000 audit: BPF prog-id=39 op=UNLOAD
Jan 21 06:15:31.934000 audit: BPF prog-id=55 op=LOAD
Jan 21 06:15:31.934000 audit: BPF prog-id=36 op=UNLOAD
Jan 21 06:15:31.934000 audit: BPF prog-id=56 op=LOAD
Jan 21 06:15:31.934000 audit: BPF prog-id=57 op=LOAD
Jan 21 06:15:31.934000 audit: BPF prog-id=37 op=UNLOAD
Jan 21 06:15:31.934000 audit: BPF prog-id=38 op=UNLOAD
Jan 21 06:15:31.944344 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 21 06:15:31.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.231129 systemd[1]: Finished ensure-sysext.service.
Jan 21 06:15:32.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.246322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 21 06:15:32.248311 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 21 06:15:32.268266 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 21 06:15:32.278409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 21 06:15:32.284132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 21 06:15:32.305490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 21 06:15:32.315212 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 21 06:15:32.327095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 21 06:15:32.335396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 21 06:15:32.335546 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 21 06:15:32.337576 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 21 06:15:32.352574 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 21 06:15:32.363447 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 21 06:15:32.375260 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 21 06:15:32.386335 kernel: kvm_amd: TSC scaling supported
Jan 21 06:15:32.393017 kernel: kvm_amd: Nested Virtualization enabled
Jan 21 06:15:32.393035 kernel: kvm_amd: Nested Paging enabled
Jan 21 06:15:32.393225 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 21 06:15:32.411844 kernel: kvm_amd: PMU virtualization is disabled
Jan 21 06:15:32.415000 audit: BPF prog-id=58 op=LOAD
Jan 21 06:15:32.425011 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 21 06:15:32.449000 audit: BPF prog-id=59 op=LOAD
Jan 21 06:15:32.455996 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 21 06:15:32.473282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 21 06:15:32.518183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 21 06:15:32.530000 audit[1503]: SYSTEM_BOOT pid=1503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.532996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 21 06:15:32.547553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 21 06:15:32.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.574280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 21 06:15:32.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.595592 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 21 06:15:32.598017 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 21 06:15:32.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.617251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 21 06:15:32.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.627109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 21 06:15:32.627396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 21 06:15:32.643145 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 21 06:15:32.643471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 21 06:15:32.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.653969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 21 06:15:32.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.665866 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 21 06:15:32.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.694154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 21 06:15:32.694493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 21 06:15:32.694589 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 21 06:15:32.701487 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 21 06:15:32.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:32.723000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 21 06:15:32.723000 audit[1525]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffece1a4020 a2=420 a3=0 items=0 ppid=1483 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:15:32.723000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 21 06:15:32.728374 augenrules[1525]: No rules
Jan 21 06:15:32.754281 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 21 06:15:32.757196 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 21 06:15:32.900062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 21 06:15:32.931538 kernel: EDAC MC: Ver: 3.0.0
Jan 21 06:15:32.941392 systemd-networkd[1500]: lo: Link UP
Jan 21 06:15:32.941470 systemd-networkd[1500]: lo: Gained carrier
Jan 21 06:15:32.948095 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 21 06:15:32.948110 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 21 06:15:32.952797 systemd-networkd[1500]: eth0: Link UP
Jan 21 06:15:32.955466 systemd-networkd[1500]: eth0: Gained carrier
Jan 21 06:15:32.955503 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 21 06:15:32.982103 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 21 06:15:32.985228 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Jan 21 06:15:33.553863 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 21 06:15:33.553940 systemd-timesyncd[1501]: Initial clock synchronization to Wed 2026-01-21 06:15:33.553512 UTC.
Jan 21 06:15:33.554298 systemd-resolved[1278]: Clock change detected. Flushing caches.
Jan 21 06:15:34.035374 ldconfig[1490]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 21 06:15:34.056559 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 21 06:15:34.067235 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 21 06:15:34.076490 systemd[1]: Reached target network.target - Network.
Jan 21 06:15:34.077401 systemd[1]: Reached target time-set.target - System Time Set.
Jan 21 06:15:34.088954 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 21 06:15:34.102117 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 21 06:15:34.107122 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 21 06:15:34.136276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 21 06:15:34.152570 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 21 06:15:34.165399 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 21 06:15:34.176393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 21 06:15:34.189178 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 21 06:15:34.201382 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 21 06:15:34.213334 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 21 06:15:34.224311 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 21 06:15:34.237244 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 21 06:15:34.250540 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 21 06:15:34.262349 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 21 06:15:34.274203 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 21 06:15:34.274326 systemd[1]: Reached target paths.target - Path Units.
Jan 21 06:15:34.282139 systemd[1]: Reached target timers.target - Timer Units.
Jan 21 06:15:34.291936 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 21 06:15:34.305105 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 21 06:15:34.317337 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 21 06:15:34.327189 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 21 06:15:34.337109 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 21 06:15:34.350210 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 21 06:15:34.359224 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 21 06:15:34.371550 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 21 06:15:34.383457 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 21 06:15:34.394270 systemd[1]: Reached target sockets.target - Socket Units.
Jan 21 06:15:34.402082 systemd[1]: Reached target basic.target - Basic System.
Jan 21 06:15:34.409987 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 21 06:15:34.410120 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 21 06:15:34.413109 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 21 06:15:34.446136 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 21 06:15:34.456522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 21 06:15:34.483898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 21 06:15:34.496451 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 21 06:15:34.501450 jq[1552]: false
Jan 21 06:15:34.506159 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 21 06:15:34.508397 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 21 06:15:34.520946 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 21 06:15:34.540614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 21 06:15:34.551880 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache
Jan 21 06:15:34.548480 oslogin_cache_refresh[1554]: Refreshing passwd entry cache
Jan 21 06:15:34.556141 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 21 06:15:34.558321 extend-filesystems[1553]: Found /dev/vda6
Jan 21 06:15:34.579556 extend-filesystems[1553]: Found /dev/vda9
Jan 21 06:15:34.579556 extend-filesystems[1553]: Checking size of /dev/vda9
Jan 21 06:15:34.574150 oslogin_cache_refresh[1554]: Failure getting users, quitting
Jan 21 06:15:34.568292 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 21 06:15:34.609312 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting
Jan 21 06:15:34.609312 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 21 06:15:34.609312 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache
Jan 21 06:15:34.609312 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting
Jan 21 06:15:34.609312 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 21 06:15:34.574169 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 21 06:15:34.593526 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 21 06:15:34.574219 oslogin_cache_refresh[1554]: Refreshing group entry cache
Jan 21 06:15:34.594148 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 21 06:15:34.608107 oslogin_cache_refresh[1554]: Failure getting groups, quitting
Jan 21 06:15:34.594864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 21 06:15:34.608122 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 21 06:15:34.598902 systemd[1]: Starting update-engine.service - Update Engine...
Jan 21 06:15:34.621036 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 21 06:15:34.638842 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 21 06:15:34.651248 update_engine[1572]: I20260121 06:15:34.643857 1572 main.cc:92] Flatcar Update Engine starting
Jan 21 06:15:34.639405 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 21 06:15:34.639999 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 21 06:15:34.640420 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 21 06:15:34.640971 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 21 06:15:34.653302 systemd[1]: motdgen.service: Deactivated successfully.
Jan 21 06:15:34.659905 extend-filesystems[1553]: Resized partition /dev/vda9
Jan 21 06:15:34.668380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 21 06:15:34.683353 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 21 06:15:34.684544 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 21 06:15:34.690890 extend-filesystems[1582]: resize2fs 1.47.3 (8-Jul-2025)
Jan 21 06:15:34.714476 jq[1576]: true
Jan 21 06:15:34.733189 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 21 06:15:34.756360 jq[1598]: true
Jan 21 06:15:34.785465 tar[1585]: linux-amd64/LICENSE
Jan 21 06:15:34.788556 tar[1585]: linux-amd64/helm
Jan 21 06:15:34.800948 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 21 06:15:34.833343 dbus-daemon[1550]: [system] SELinux support is enabled
Jan 21 06:15:34.833932 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 21 06:15:34.840189 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 21 06:15:34.840189 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 21 06:15:34.840189 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 21 06:15:34.882170 extend-filesystems[1553]: Resized filesystem in /dev/vda9
Jan 21 06:15:34.902823 update_engine[1572]: I20260121 06:15:34.856923 1572 update_check_scheduler.cc:74] Next update check in 9m16s
Jan 21 06:15:34.850203 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 21 06:15:34.850581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 21 06:15:34.903998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 21 06:15:34.904037 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 21 06:15:34.917138 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 21 06:15:34.917262 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 21 06:15:34.928564 systemd[1]: Started update-engine.service - Update Engine.
Jan 21 06:15:34.940971 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
Jan 21 06:15:34.942161 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 21 06:15:34.950250 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 21 06:15:34.959332 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 21 06:15:34.959359 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 21 06:15:34.960405 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 21 06:15:34.971882 systemd-logind[1571]: New seat seat0.
Jan 21 06:15:34.976204 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 21 06:15:35.093922 containerd[1588]: time="2026-01-21T06:15:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 21 06:15:35.094323 containerd[1588]: time="2026-01-21T06:15:35.094230111Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 21 06:15:35.097557 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 21 06:15:35.118567 containerd[1588]: time="2026-01-21T06:15:35.118436242Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.843µs"
Jan 21 06:15:35.118567 containerd[1588]: time="2026-01-21T06:15:35.118562127Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 21 06:15:35.118905 containerd[1588]: time="2026-01-21T06:15:35.118888146Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 21 06:15:35.118935 containerd[1588]: time="2026-01-21T06:15:35.118910738Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 21 06:15:35.119283 containerd[1588]: time="2026-01-21T06:15:35.119166475Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 21 06:15:35.119283 containerd[1588]: time="2026-01-21T06:15:35.119270850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 21 06:15:35.119470 containerd[1588]: time="2026-01-21T06:15:35.119366308Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 21 06:15:35.119500 containerd[1588]: time="2026-01-21T06:15:35.119469581Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120031400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120060473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120076904Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120088686Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120320259Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120338002Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 21 06:15:35.120868 containerd[1588]: time="2026-01-21T06:15:35.120457134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.121336 containerd[1588]: time="2026-01-21T06:15:35.121314064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.121438 containerd[1588]: time="2026-01-21T06:15:35.121415102Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 21 06:15:35.121503 containerd[1588]: time="2026-01-21T06:15:35.121487708Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 21 06:15:35.123325 containerd[1588]: time="2026-01-21T06:15:35.121596901Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 21 06:15:35.123325 containerd[1588]: time="2026-01-21T06:15:35.122244981Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 21 06:15:35.123325 containerd[1588]: time="2026-01-21T06:15:35.122327395Z" level=info msg="metadata content store policy set" policy=shared
Jan 21 06:15:35.137183 containerd[1588]: time="2026-01-21T06:15:35.137144511Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 21 06:15:35.137860 containerd[1588]: time="2026-01-21T06:15:35.137603618Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 21 06:15:35.138035 containerd[1588]: time="2026-01-21T06:15:35.138006310Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 21 06:15:35.138130 containerd[1588]: time="2026-01-21T06:15:35.138109573Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 21 06:15:35.138205 containerd[1588]: time="2026-01-21T06:15:35.138186025Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 21 06:15:35.138273 containerd[1588]: time="2026-01-21T06:15:35.138259482Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 21 06:15:35.138323 containerd[1588]: time="2026-01-21T06:15:35.138312321Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 21 06:15:35.138364 containerd[1588]: time="2026-01-21T06:15:35.138355091Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 21 06:15:35.138405 containerd[1588]: time="2026-01-21T06:15:35.138395667Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 21 06:15:35.138457 containerd[1588]: time="2026-01-21T06:15:35.138445450Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 21 06:15:35.138500 containerd[1588]: time="2026-01-21T06:15:35.138490204Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 21 06:15:35.138539 containerd[1588]: time="2026-01-21T06:15:35.138530088Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 21 06:15:35.138577 containerd[1588]: time="2026-01-21T06:15:35.138567909Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 21 06:15:35.138981 containerd[1588]: time="2026-01-21T06:15:35.138607442Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 21 06:15:35.139227 containerd[1588]: time="2026-01-21T06:15:35.139203314Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 21 06:15:35.139313 containerd[1588]: time="2026-01-21T06:15:35.139294735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 21 06:15:35.139393 containerd[1588]: time="2026-01-21T06:15:35.139371689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139442832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139467047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139477466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139487695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139501751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139511319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139520216Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139529664Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139550282Z" level=info msg="loading plugin"
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139589826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 06:15:35.140047 containerd[1588]: time="2026-01-21T06:15:35.139605054Z" level=info msg="Start snapshots syncer" Jan 21 06:15:35.140612 containerd[1588]: time="2026-01-21T06:15:35.140591225Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 06:15:35.141160 containerd[1588]: time="2026-01-21T06:15:35.141123428Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnp
rivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141414472Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141478691Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141614836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141852950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141863390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141874460Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141891292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141904125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141913203Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141921248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141931096Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141964138Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141975729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 06:15:35.142454 containerd[1588]: time="2026-01-21T06:15:35.141983513Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.141999654Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142014632Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142028769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142051451Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142066920Z" level=info msg="runtime interface created" Jan 21 06:15:35.143018 containerd[1588]: 
time="2026-01-21T06:15:35.142075195Z" level=info msg="created NRI interface" Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142088780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142102856Z" level=info msg="Connect containerd service" Jan 21 06:15:35.143018 containerd[1588]: time="2026-01-21T06:15:35.142125188Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 06:15:35.144557 containerd[1588]: time="2026-01-21T06:15:35.144484552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 06:15:35.296516 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 06:15:35.305540 containerd[1588]: time="2026-01-21T06:15:35.305499907Z" level=info msg="Start subscribing containerd event" Jan 21 06:15:35.305942 containerd[1588]: time="2026-01-21T06:15:35.305920391Z" level=info msg="Start recovering state" Jan 21 06:15:35.306127 containerd[1588]: time="2026-01-21T06:15:35.306106018Z" level=info msg="Start event monitor" Jan 21 06:15:35.306205 containerd[1588]: time="2026-01-21T06:15:35.306186940Z" level=info msg="Start cni network conf syncer for default" Jan 21 06:15:35.306268 containerd[1588]: time="2026-01-21T06:15:35.306253584Z" level=info msg="Start streaming server" Jan 21 06:15:35.306498 containerd[1588]: time="2026-01-21T06:15:35.306322713Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 06:15:35.306498 containerd[1588]: time="2026-01-21T06:15:35.306339173Z" level=info msg="runtime interface starting up..." Jan 21 06:15:35.306498 containerd[1588]: time="2026-01-21T06:15:35.306348130Z" level=info msg="starting plugins..." 
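The containerd entries above are structured key=value ("logfmt"-style) lines. As an editorial aside, here is a minimal parsing sketch (not containerd's own code; `parse_kv` and `skipped_plugins` are names invented for this example) for pulling out which plugins were skipped and why:

```python
import re

# Matches one key=value field; the value is either a double-quoted string
# (with backslash escapes) or a bare token.
_PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_kv(line: str) -> dict:
    """Parse one containerd log line into a dict of its key=value fields."""
    fields = {}
    for key, val in _PAIR.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')  # strip quotes, unescape
        fields[key] = val
    return fields

def skipped_plugins(lines):
    """Yield (plugin id, reason) for every 'skip loading plugin' entry."""
    for line in lines:
        kv = parse_kv(line)
        if kv.get("msg") == "skip loading plugin":
            yield kv["id"], kv["error"]
```

Run over the entries above, this would report, for example, that the btrfs snapshotter was skipped because /var/lib/containerd sits on ext4, and the devmapper snapshotter because it was not configured.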
Jan 21 06:15:35.306498 containerd[1588]: time="2026-01-21T06:15:35.306367346Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 21 06:15:35.308212 containerd[1588]: time="2026-01-21T06:15:35.308080674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 21 06:15:35.308212 containerd[1588]: time="2026-01-21T06:15:35.308160924Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 21 06:15:35.310043 containerd[1588]: time="2026-01-21T06:15:35.310022990Z" level=info msg="containerd successfully booted in 0.219829s"
Jan 21 06:15:35.310345 systemd[1]: Started containerd.service - containerd container runtime.
Jan 21 06:15:35.318549 tar[1585]: linux-amd64/README.md
Jan 21 06:15:35.346343 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 21 06:15:35.359055 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 21 06:15:35.375104 systemd-networkd[1500]: eth0: Gained IPv6LL
Jan 21 06:15:35.376547 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 21 06:15:35.387146 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 21 06:15:35.402493 systemd[1]: Reached target network-online.target - Network is Online.
Jan 21 06:15:35.417256 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 21 06:15:35.433229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 21 06:15:35.449035 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 21 06:15:35.460006 systemd[1]: issuegen.service: Deactivated successfully.
Jan 21 06:15:35.460474 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 21 06:15:35.492971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 21 06:15:35.531012 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 21 06:15:35.543848 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 21 06:15:35.559916 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 21 06:15:35.571580 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 21 06:15:35.581167 systemd[1]: Reached target getty.target - Login Prompts.
Jan 21 06:15:35.590116 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 21 06:15:35.590505 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 21 06:15:35.601445 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 21 06:15:36.865348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 21 06:15:36.876602 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 21 06:15:36.887090 systemd[1]: Startup finished in 12.918s (kernel) + 15.819s (initrd) + 9.438s (userspace) = 38.177s.
Jan 21 06:15:37.210434 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 21 06:15:38.150880 kubelet[1689]: E0121 06:15:38.150358 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 21 06:15:38.155466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 06:15:38.155993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 21 06:15:38.157040 systemd[1]: kubelet.service: Consumed 1.252s CPU time, 269.4M memory peak.
Jan 21 06:15:44.042514 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
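The "Startup finished" line above can be read programmatically. A small editorial sketch (`parse_startup` is a name invented here, not a systemd API); note that systemd computes the printed total from microsecond-precision values, so it can differ from the sum of the rounded per-stage figures by a few milliseconds (38.177s vs 38.175s in this log):

```python
import re

def parse_startup(line: str):
    """Return ({stage: seconds}, total_seconds) for a systemd
    'Startup finished in ...' message."""
    # Per-stage figures look like "12.918s (kernel)".
    stages = {name: float(secs)
              for secs, name in re.findall(r'([\d.]+)s \((\w+)\)', line)}
    # The grand total follows "= ", e.g. "= 38.177s."
    total = float(re.search(r'= ([\d.]+)s', line).group(1))
    return stages, total
```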
Jan 21 06:15:44.052543 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:53026.service - OpenSSH per-connection server daemon (10.0.0.1:53026).
Jan 21 06:15:44.737286 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 53026 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:44.749518 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:44.832017 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 21 06:15:44.848026 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 21 06:15:44.910986 systemd-logind[1571]: New session 1 of user core.
Jan 21 06:15:44.993352 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 21 06:15:45.017259 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 21 06:15:45.128168 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:45.187309 systemd-logind[1571]: New session 2 of user core.
Jan 21 06:15:45.835179 systemd[1708]: Queued start job for default target default.target.
Jan 21 06:15:45.866223 systemd[1708]: Created slice app.slice - User Application Slice.
Jan 21 06:15:45.867434 systemd[1708]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 21 06:15:45.869802 systemd[1708]: Reached target paths.target - Paths.
Jan 21 06:15:45.871336 systemd[1708]: Reached target timers.target - Timers.
Jan 21 06:15:45.889053 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 21 06:15:45.894355 systemd[1708]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 21 06:15:45.979554 systemd[1708]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 21 06:15:45.991208 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 21 06:15:45.991484 systemd[1708]: Reached target sockets.target - Sockets.
Jan 21 06:15:45.993588 systemd[1708]: Reached target basic.target - Basic System.
Jan 21 06:15:45.994488 systemd[1708]: Reached target default.target - Main User Target.
Jan 21 06:15:45.994539 systemd[1708]: Startup finished in 756ms.
Jan 21 06:15:45.997173 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 21 06:15:46.017371 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 21 06:15:46.111334 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:50716.service - OpenSSH per-connection server daemon (10.0.0.1:50716).
Jan 21 06:15:46.401958 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 50716 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:46.409024 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:46.458279 systemd-logind[1571]: New session 3 of user core.
Jan 21 06:15:46.483498 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 21 06:15:46.647344 sshd[1726]: Connection closed by 10.0.0.1 port 50716
Jan 21 06:15:46.652500 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Jan 21 06:15:46.688400 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:50716.service: Deactivated successfully.
Jan 21 06:15:46.697119 systemd[1]: session-3.scope: Deactivated successfully.
Jan 21 06:15:46.717229 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit.
Jan 21 06:15:46.722497 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:50728.service - OpenSSH per-connection server daemon (10.0.0.1:50728).
Jan 21 06:15:46.733176 systemd-logind[1571]: Removed session 3.
Jan 21 06:15:47.091021 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 50728 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:47.108093 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:47.164152 systemd-logind[1571]: New session 4 of user core.
Jan 21 06:15:47.181386 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 21 06:15:47.316160 sshd[1736]: Connection closed by 10.0.0.1 port 50728
Jan 21 06:15:47.323010 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Jan 21 06:15:47.406246 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:50728.service: Deactivated successfully.
Jan 21 06:15:47.435451 systemd[1]: session-4.scope: Deactivated successfully.
Jan 21 06:15:47.462008 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit.
Jan 21 06:15:47.489392 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:50738.service - OpenSSH per-connection server daemon (10.0.0.1:50738).
Jan 21 06:15:47.494043 systemd-logind[1571]: Removed session 4.
Jan 21 06:15:47.898244 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 50738 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:47.906141 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:47.964433 systemd-logind[1571]: New session 5 of user core.
Jan 21 06:15:47.985224 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 21 06:15:48.132272 sshd[1746]: Connection closed by 10.0.0.1 port 50738
Jan 21 06:15:48.134206 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Jan 21 06:15:48.213214 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:50738.service: Deactivated successfully.
Jan 21 06:15:48.220464 systemd[1]: session-5.scope: Deactivated successfully.
Jan 21 06:15:48.223159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 21 06:15:48.259314 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit.
Jan 21 06:15:48.278297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 21 06:15:48.284204 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:50752.service - OpenSSH per-connection server daemon (10.0.0.1:50752).
Jan 21 06:15:48.295243 systemd-logind[1571]: Removed session 5.
Jan 21 06:15:48.610394 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 50752 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:48.620387 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:48.662056 systemd-logind[1571]: New session 6 of user core.
Jan 21 06:15:48.686257 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 21 06:15:48.918454 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 21 06:15:48.920402 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 21 06:15:49.030262 sudo[1760]: pam_unix(sudo:session): session closed for user root
Jan 21 06:15:49.037264 sshd[1759]: Connection closed by 10.0.0.1 port 50752
Jan 21 06:15:49.040249 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Jan 21 06:15:49.142203 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:50752.service: Deactivated successfully.
Jan 21 06:15:49.167408 systemd[1]: session-6.scope: Deactivated successfully.
Jan 21 06:15:49.190406 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit.
Jan 21 06:15:49.226549 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:50766.service - OpenSSH per-connection server daemon (10.0.0.1:50766).
Jan 21 06:15:49.238123 systemd-logind[1571]: Removed session 6.
Jan 21 06:15:49.515575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 21 06:15:49.574334 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 21 06:15:49.674459 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 50766 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:49.690241 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:49.752792 systemd-logind[1571]: New session 7 of user core.
Jan 21 06:15:49.771251 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 21 06:15:49.924555 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 21 06:15:49.927382 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 21 06:15:50.093179 sudo[1784]: pam_unix(sudo:session): session closed for user root
Jan 21 06:15:50.174177 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 21 06:15:50.179300 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 21 06:15:50.272509 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 21 06:15:50.278425 kubelet[1774]: E0121 06:15:50.276488 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 21 06:15:50.289457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 06:15:50.290068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 21 06:15:50.303460 systemd[1]: kubelet.service: Consumed 632ms CPU time, 110.3M memory peak.
Jan 21 06:15:50.988000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 21 06:15:50.995579 augenrules[1811]: No rules
Jan 21 06:15:51.000122 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 21 06:15:51.000615 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 21 06:15:51.008125 sudo[1783]: pam_unix(sudo:session): session closed for user root
Jan 21 06:15:51.034251 kernel: kauditd_printk_skb: 123 callbacks suppressed
Jan 21 06:15:51.034341 kernel: audit: type=1305 audit(1768976150.988:229): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 21 06:15:51.034122 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Jan 21 06:15:51.035106 sshd[1782]: Connection closed by 10.0.0.1 port 50766
Jan 21 06:15:50.988000 audit[1811]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc594dd40 a2=420 a3=0 items=0 ppid=1791 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:15:51.250117 kernel: audit: type=1300 audit(1768976150.988:229): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc594dd40 a2=420 a3=0 items=0 ppid=1791 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:15:51.250216 kernel: audit: type=1327 audit(1768976150.988:229): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 21 06:15:50.988000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 21 06:15:51.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.380342 kernel: audit: type=1130 audit(1768976151.000:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.403154 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:50766.service: Deactivated successfully.
Jan 21 06:15:51.428840 systemd[1]: session-7.scope: Deactivated successfully.
Jan 21 06:15:51.436013 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit.
Jan 21 06:15:51.451308 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:50776.service - OpenSSH per-connection server daemon (10.0.0.1:50776).
Jan 21 06:15:51.458137 kernel: audit: type=1131 audit(1768976151.000:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.001000 audit[1783]: USER_END pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.464478 systemd-logind[1571]: Removed session 7.
Jan 21 06:15:51.566601 kernel: audit: type=1106 audit(1768976151.001:232): pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.569612 kernel: audit: type=1104 audit(1768976151.001:233): pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.001000 audit[1783]: CRED_DISP pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:51.054000 audit[1767]: USER_END pid=1767 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:51.874145 kernel: audit: type=1106 audit(1768976151.054:234): pid=1767 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:51.055000 audit[1767]: CRED_DISP pid=1767 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:51.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.136:22-10.0.0.1:50766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:52.171174 kernel: audit: type=1104 audit(1768976151.055:235): pid=1767 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:52.171286 kernel: audit: type=1131 audit(1768976151.403:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.136:22-10.0.0.1:50766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:52.171473 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:15:51.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:50776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:52.164000 audit[1820]: USER_ACCT pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:52.178000 audit[1820]: CRED_ACQ pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:52.178000 audit[1820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7e1518c0 a2=3 a3=0 items=0 ppid=1 pid=1820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:15:52.178000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 21 06:15:52.187559 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:15:52.277608 systemd-logind[1571]: New session 8 of user core.
Jan 21 06:15:52.291375 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 21 06:15:52.349000 audit[1820]: USER_START pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:52.396000 audit[1824]: CRED_ACQ pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:15:52.857000 audit[1825]: USER_ACCT pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:52.863368 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 21 06:15:52.862000 audit[1825]: CRED_REFR pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:52.866000 audit[1825]: USER_START pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 21 06:15:52.867439 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 21 06:15:55.548080 kernel: hrtimer: interrupt took 3142925 ns
Jan 21 06:15:57.757451 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 21 06:15:57.892345 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 21 06:16:00.514271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 21 06:16:00.567575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 21 06:16:02.710862 dockerd[1847]: time="2026-01-21T06:16:02.705366157Z" level=info msg="Starting up"
Jan 21 06:16:02.723421 dockerd[1847]: time="2026-01-21T06:16:02.723389037Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 21 06:16:03.275614 dockerd[1847]: time="2026-01-21T06:16:03.275561073Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 21 06:16:03.834583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 21 06:16:03.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:16:03.853204 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 21 06:16:03.853310 kernel: audit: type=1130 audit(1768976163.836:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jan 21 06:16:03.963570 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:16:04.028468 dockerd[1847]: time="2026-01-21T06:16:04.027532385Z" level=info msg="Loading containers: start." Jan 21 06:16:04.171395 kernel: Initializing XFRM netlink socket Jan 21 06:16:04.752965 kubelet[1880]: E0121 06:16:04.750254 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:16:04.765413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:16:04.766324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:16:04.771952 systemd[1]: kubelet.service: Consumed 1.023s CPU time, 110.2M memory peak. Jan 21 06:16:04.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:16:04.833957 kernel: audit: type=1131 audit(1768976164.771:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 21 06:16:06.167000 audit[1919]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.167000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe953c3560 a2=0 a3=0 items=0 ppid=1847 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.420379 kernel: audit: type=1325 audit(1768976166.167:248): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.420510 kernel: audit: type=1300 audit(1768976166.167:248): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe953c3560 a2=0 a3=0 items=0 ppid=1847 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 06:16:06.533536 kernel: audit: type=1327 audit(1768976166.167:248): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 06:16:06.538382 kernel: audit: type=1325 audit(1768976166.257:249): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.257000 audit[1921]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.581140 kernel: audit: type=1300 audit(1768976166.257:249): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd28cb9890 a2=0 a3=0 items=0 ppid=1847 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.257000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd28cb9890 a2=0 a3=0 items=0 ppid=1847 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.743064 kernel: audit: type=1327 audit(1768976166.257:249): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 06:16:06.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 06:16:06.348000 audit[1923]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.836181 kernel: audit: type=1325 audit(1768976166.348:250): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.836301 kernel: audit: type=1300 audit(1768976166.348:250): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd3cad310 a2=0 a3=0 items=0 ppid=1847 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.348000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd3cad310 a2=0 a3=0 items=0 ppid=1847 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 
06:16:06.429000 audit[1925]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.429000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5e3438b0 a2=0 a3=0 items=0 ppid=1847 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 06:16:06.533000 audit[1927]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.533000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9dfc0ce0 a2=0 a3=0 items=0 ppid=1847 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.533000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 06:16:06.636000 audit[1929]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.636000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc20311c90 a2=0 a3=0 items=0 ppid=1847 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 
06:16:06.742000 audit[1931]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.742000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1b5f7b50 a2=0 a3=0 items=0 ppid=1847 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 06:16:06.934000 audit[1933]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:06.934000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffecd281a60 a2=0 a3=0 items=0 ppid=1847 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:06.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 06:16:07.720000 audit[1936]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:07.720000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe435f3710 a2=0 a3=0 items=0 ppid=1847 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:07.720000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 21 06:16:07.801000 audit[1938]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:07.801000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd59a1faa0 a2=0 a3=0 items=0 ppid=1847 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:07.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 06:16:07.906000 audit[1940]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:07.906000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe2f0e9db0 a2=0 a3=0 items=0 ppid=1847 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:07.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 06:16:07.975000 audit[1942]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:07.975000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdb3988e90 a2=0 a3=0 items=0 ppid=1847 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:07.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 06:16:08.043000 audit[1944]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:08.043000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffce24ed530 a2=0 a3=0 items=0 ppid=1847 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:08.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 06:16:09.714000 audit[1974]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:09.761248 kernel: kauditd_printk_skb: 31 callbacks suppressed Jan 21 06:16:09.761522 kernel: audit: type=1325 audit(1768976169.714:261): table=nat:15 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:09.875957 kernel: audit: type=1300 audit(1768976169.714:261): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe00da50c0 a2=0 a3=0 items=0 ppid=1847 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:09.714000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe00da50c0 a2=0 a3=0 items=0 ppid=1847 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:09.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 06:16:09.830000 audit[1976]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.142171 kernel: audit: type=1327 audit(1768976169.714:261): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 06:16:10.142428 kernel: audit: type=1325 audit(1768976169.830:262): table=filter:16 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.217966 kernel: audit: type=1300 audit(1768976169.830:262): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb590bd70 a2=0 a3=0 items=0 ppid=1847 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:09.830000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb590bd70 a2=0 a3=0 items=0 ppid=1847 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:09.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 06:16:10.448589 kernel: audit: type=1327 audit(1768976169.830:262): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 06:16:10.449104 kernel: audit: type=1325 audit(1768976169.951:263): table=filter:17 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 
06:16:09.951000 audit[1978]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:09.951000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1b3640c0 a2=0 a3=0 items=0 ppid=1847 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.806077 kernel: audit: type=1300 audit(1768976169.951:263): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1b3640c0 a2=0 a3=0 items=0 ppid=1847 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:09.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 06:16:10.904794 kernel: audit: type=1327 audit(1768976169.951:263): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 06:16:10.910046 kernel: audit: type=1325 audit(1768976170.036:264): table=filter:18 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.036000 audit[1980]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.036000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeba0762a0 a2=0 a3=0 items=0 ppid=1847 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.036000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 06:16:10.144000 audit[1982]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.144000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff4b6dc740 a2=0 a3=0 items=0 ppid=1847 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 06:16:10.229000 audit[1984]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.229000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe4701c00 a2=0 a3=0 items=0 ppid=1847 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 06:16:10.327000 audit[1986]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.327000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff523c960 a2=0 a3=0 items=0 ppid=1847 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.327000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 06:16:10.427000 audit[1988]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.427000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe3b5c9180 a2=0 a3=0 items=0 ppid=1847 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.427000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 06:16:10.897000 audit[1990]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:10.897000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc3bd1c4c0 a2=0 a3=0 items=0 ppid=1847 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:10.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 21 06:16:11.015000 audit[1992]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:11.015000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeb5c0a1e0 a2=0 a3=0 items=0 ppid=1847 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:11.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 06:16:11.190000 audit[1994]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:11.190000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe55b50da0 a2=0 a3=0 items=0 ppid=1847 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:11.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 06:16:11.357000 audit[1996]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:11.357000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd530d4160 a2=0 a3=0 items=0 ppid=1847 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:11.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 06:16:11.538000 audit[1998]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:11.538000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe435758e0 a2=0 a3=0 items=0 
ppid=1847 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:11.538000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 06:16:11.874000 audit[2003]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:11.874000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedc356420 a2=0 a3=0 items=0 ppid=1847 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:11.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 06:16:11.977000 audit[2005]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:11.977000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc15cb9a80 a2=0 a3=0 items=0 ppid=1847 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:11.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 06:16:12.205000 audit[2007]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:12.205000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffed160ebb0 a2=0 a3=0 items=0 ppid=1847 
pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:12.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 06:16:12.452000 audit[2009]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:12.452000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff244c35e0 a2=0 a3=0 items=0 ppid=1847 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:12.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 06:16:12.588000 audit[2011]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:12.588000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd1ad2abf0 a2=0 a3=0 items=0 ppid=1847 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:12.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 06:16:12.746000 audit[2013]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:16:12.746000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd4109e250 a2=0 a3=0 items=0 ppid=1847 pid=2013 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:12.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 06:16:13.410000 audit[2019]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:13.410000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe0096ab50 a2=0 a3=0 items=0 ppid=1847 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:13.410000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 21 06:16:13.629000 audit[2021]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:13.629000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffda00202e0 a2=0 a3=0 items=0 ppid=1847 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:13.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 21 06:16:14.482000 audit[2029]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:14.482000 audit[2029]: SYSCALL arch=c000003e syscall=46 
success=yes exit=300 a0=3 a1=7ffc56226e10 a2=0 a3=0 items=0 ppid=1847 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:14.482000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 21 06:16:15.027151 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 21 06:16:15.027227 kernel: audit: type=1325 audit(1768976174.971:283): table=filter:37 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:14.971000 audit[2035]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:14.994235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 21 06:16:15.017317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
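The `PROCTITLE` records above carry the process's command line as a single hex string, with NUL bytes separating the arguments. A minimal decoder, using the hex string from the first `iptables` record in this run as its sample input:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into the original argv list.

    The audit subsystem hex-encodes argv and joins arguments with NUL
    bytes, so splitting the raw bytes on b"\x00" recovers each argument.
    """
    raw = bytes.fromhex(hex_str)
    return [arg.decode() for arg in raw.split(b"\x00")]

# Sample copied verbatim from one of the PROCTITLE records above.
sample = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D4900464F5257415244002D6A00444F434B45522D55534552"
)
print(decode_proctitle(sample))
# -> ['/usr/bin/iptables', '--wait', '-I', 'FORWARD', '-j', 'DOCKER-USER']
```

Decoded this way, the records show Docker installing its standard `DOCKER-USER`, `DOCKER-FORWARD`, and isolation chains for both IPv4 (`iptables`) and IPv6 (`ip6tables`).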
Jan 21 06:16:15.111315 kernel: audit: type=1300 audit(1768976174.971:283): arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd21949bb0 a2=0 a3=0 items=0 ppid=1847 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:15.111405 kernel: audit: type=1327 audit(1768976174.971:283): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 06:16:15.111450 kernel: audit: type=1325 audit(1768976175.071:284): table=filter:38 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.111470 kernel: audit: type=1300 audit(1768976175.071:284): arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffcd21b35a0 a2=0 a3=0 items=0 ppid=1847 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:15.111488 kernel: audit: type=1327 audit(1768976175.071:284): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 06:16:14.971000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd21949bb0 a2=0 a3=0 items=0 ppid=1847 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:14.971000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 06:16:15.071000 audit[2038]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.071000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffcd21b35a0 a2=0 a3=0 items=0 ppid=1847 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:15.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 06:16:15.193000 audit[2040]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.480356 systemd-networkd[1500]: docker0: Link UP Jan 21 06:16:15.580174 dockerd[1847]: time="2026-01-21T06:16:15.576174468Z" level=info msg="Loading containers: done." Jan 21 06:16:15.877215 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2231815387-merged.mount: Deactivated successfully. 
Jan 21 06:16:16.080604 kernel: audit: type=1325 audit(1768976175.193:285): table=filter:39 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.193000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff000cf880 a2=0 a3=0 items=0 ppid=1847 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:15.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 06:16:16.278918 dockerd[1847]: time="2026-01-21T06:16:16.278023680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 06:16:16.278918 dockerd[1847]: time="2026-01-21T06:16:16.278125840Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 06:16:16.278918 dockerd[1847]: time="2026-01-21T06:16:16.278241472Z" level=info msg="Initializing buildkit" Jan 21 06:16:16.371300 kernel: audit: type=1300 audit(1768976175.193:285): arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff000cf880 a2=0 a3=0 items=0 ppid=1847 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:16.376290 kernel: audit: type=1327 audit(1768976175.193:285): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 06:16:16.376326 kernel: audit: type=1325 
audit(1768976175.262:286): table=filter:40 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.262000 audit[2042]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.262000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc63d01320 a2=0 a3=0 items=0 ppid=1847 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:15.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 06:16:15.428000 audit[2044]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:16:15.428000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc65520c0 a2=0 a3=0 items=0 ppid=1847 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:16:15.428000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 21 06:16:17.477010 dockerd[1847]: time="2026-01-21T06:16:17.474489222Z" level=info msg="Completed buildkit initialization" Jan 21 06:16:17.627446 dockerd[1847]: time="2026-01-21T06:16:17.624212255Z" level=info msg="Daemon has completed initialization" Jan 21 06:16:17.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:16:17.636610 dockerd[1847]: time="2026-01-21T06:16:17.627610229Z" level=info msg="API listen on /run/docker.sock" Jan 21 06:16:17.635449 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 06:16:17.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:16:17.973579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:16:18.102494 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:16:19.139345 kubelet[2088]: E0121 06:16:19.138000 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:16:19.158036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:16:19.162043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:16:19.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:16:19.169421 systemd[1]: kubelet.service: Consumed 1.239s CPU time, 110.8M memory peak. Jan 21 06:16:19.727381 update_engine[1572]: I20260121 06:16:19.721405 1572 update_attempter.cc:509] Updating boot flags... 
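The kernel audit lines above stamp each record as `audit(<epoch>.<millis>:<serial>)` rather than with a wall-clock time. A small parser to convert those stamps to UTC, checked against a stamp taken from the records above (`audit(1768976174.971:283)` corresponds to the journal time `Jan 21 06:16:14.971`):

```python
import re
from datetime import datetime, timezone

# Matches the audit(<epoch>.<millis>:<serial>) stamp format used in the
# kernel audit lines above.
AUDIT_RE = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

def parse_audit_stamp(text: str):
    """Return (UTC datetime, serial) for the first audit stamp in text."""
    m = AUDIT_RE.search(text)
    epoch, millis, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))
    ts = datetime.fromtimestamp(epoch + millis / 1000, tz=timezone.utc)
    return ts, serial

ts, serial = parse_audit_stamp("audit(1768976174.971:283)")
# ts -> 2026-01-21 06:16:14.971 UTC, serial -> 283
```

The serial number is what ties the split `type=1325`/`1300`/`1327` kernel lines back to a single audit event.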
Jan 21 06:16:26.269091 containerd[1588]: time="2026-01-21T06:16:26.268549012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 21 06:16:29.242196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 21 06:16:29.257174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:16:31.569206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226000084.mount: Deactivated successfully. Jan 21 06:16:31.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:16:31.902321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:16:31.953261 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 21 06:16:31.953608 kernel: audit: type=1130 audit(1768976191.907:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:16:32.073926 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:16:33.035139 kubelet[2137]: E0121 06:16:33.033040 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:16:33.069028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:16:33.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=failed' Jan 21 06:16:33.077073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:16:33.079042 systemd[1]: kubelet.service: Consumed 1.722s CPU time, 108.6M memory peak. Jan 21 06:16:33.185968 kernel: audit: type=1131 audit(1768976193.077:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:16:44.355382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 21 06:16:44.446551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:16:48.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:16:48.437416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:16:48.527913 kernel: audit: type=1130 audit(1768976208.437:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:16:48.575268 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:16:52.364606 kubelet[2206]: E0121 06:16:52.360263 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:16:52.436011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:16:52.437007 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 21 06:16:52.443012 systemd[1]: kubelet.service: Consumed 4.191s CPU time, 110.2M memory peak. Jan 21 06:16:52.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:16:52.558403 kernel: audit: type=1131 audit(1768976212.438:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:17:02.115433 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1022695919 wd_nsec: 1022695323 Jan 21 06:17:02.540007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 21 06:17:02.556599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:17:04.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:04.072508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:17:04.168444 kernel: audit: type=1130 audit(1768976224.072:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:17:04.200038 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:17:05.210539 kubelet[2222]: E0121 06:17:05.210298 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:17:05.224430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:17:05.226000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:17:05.230603 systemd[1]: kubelet.service: Consumed 1.580s CPU time, 110.4M memory peak. Jan 21 06:17:05.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:17:05.300995 kernel: audit: type=1131 audit(1768976225.227:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 21 06:17:08.429292 containerd[1588]: time="2026-01-21T06:17:08.427311360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:08.446800 containerd[1588]: time="2026-01-21T06:17:08.442251701Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30103808" Jan 21 06:17:08.466341 containerd[1588]: time="2026-01-21T06:17:08.464999720Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:08.540382 containerd[1588]: time="2026-01-21T06:17:08.539542702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:08.583316 containerd[1588]: time="2026-01-21T06:17:08.582570821Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 42.3139679s" Jan 21 06:17:08.587970 containerd[1588]: time="2026-01-21T06:17:08.583460088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 21 06:17:08.615415 containerd[1588]: time="2026-01-21T06:17:08.606372425Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 21 06:17:15.248381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
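The pull record above reports the kube-apiserver image (size `30111311`) taking 42.3139679s to arrive. A back-of-the-envelope throughput check, assuming containerd's reported size is in bytes, gives a sense of how slow this registry link is:

```python
# Rough effective-throughput estimate for the kube-apiserver pull above.
# Assumption: containerd's reported "size" is compressed bytes.
size_bytes = 30_111_311   # from the "Pulled image" record
elapsed_s = 42.3139679    # duration from the same record

rate_mib_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"{rate_mib_s:.2f} MiB/s")
```

At well under 1 MiB/s, the multi-minute gaps between the `PullImage` and `Pulled image` entries for each control-plane image in this log are consistent with bandwidth, not registry latency.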
Jan 21 06:17:15.262281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:17:16.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:16.268517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:17:16.368983 kernel: audit: type=1130 audit(1768976236.267:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:16.378427 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:17:16.747442 kubelet[2244]: E0121 06:17:16.747180 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:17:16.772097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:17:16.772530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:17:16.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:17:16.780048 systemd[1]: kubelet.service: Consumed 890ms CPU time, 110.6M memory peak. Jan 21 06:17:16.841025 kernel: audit: type=1131 audit(1768976236.779:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 21 06:17:23.918294 containerd[1588]: time="2026-01-21T06:17:23.914513655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:23.925313 containerd[1588]: time="2026-01-21T06:17:23.923405665Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26011378" Jan 21 06:17:23.929951 containerd[1588]: time="2026-01-21T06:17:23.928549840Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:23.938455 containerd[1588]: time="2026-01-21T06:17:23.938426479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:23.941094 containerd[1588]: time="2026-01-21T06:17:23.939518894Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 15.330436923s" Jan 21 06:17:23.941094 containerd[1588]: time="2026-01-21T06:17:23.939552939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 21 06:17:23.951340 containerd[1588]: time="2026-01-21T06:17:23.949531382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 21 06:17:26.994611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
Jan 21 06:17:27.005071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:17:27.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:27.735238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:17:27.782946 kernel: audit: type=1130 audit(1768976247.735:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:27.809216 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:17:28.253015 kubelet[2265]: E0121 06:17:28.252565 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:17:28.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:17:28.277572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:17:28.278091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:17:28.279134 systemd[1]: kubelet.service: Consumed 814ms CPU time, 110.3M memory peak. Jan 21 06:17:28.382047 kernel: audit: type=1131 audit(1768976248.278:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 21 06:17:32.408885 containerd[1588]: time="2026-01-21T06:17:32.407319854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:32.419289 containerd[1588]: time="2026-01-21T06:17:32.419254422Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 21 06:17:32.434961 containerd[1588]: time="2026-01-21T06:17:32.434896654Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:32.453385 containerd[1588]: time="2026-01-21T06:17:32.451266215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:32.460919 containerd[1588]: time="2026-01-21T06:17:32.455484568Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 8.505899021s" Jan 21 06:17:32.460919 containerd[1588]: time="2026-01-21T06:17:32.458372191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 21 06:17:32.461332 containerd[1588]: time="2026-01-21T06:17:32.461154873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 21 06:17:38.508531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Jan 21 06:17:38.518220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:17:38.678470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741247705.mount: Deactivated successfully. Jan 21 06:17:39.592248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:17:39.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:39.641438 kernel: audit: type=1130 audit(1768976259.592:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:39.660512 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:17:40.144176 kubelet[2289]: E0121 06:17:40.143945 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:17:40.177404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:17:40.178133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:17:40.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:17:40.192491 systemd[1]: kubelet.service: Consumed 853ms CPU time, 110M memory peak. 
Jan 21 06:17:40.230950 kernel: audit: type=1131 audit(1768976260.179:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:17:48.481879 containerd[1588]: time="2026-01-21T06:17:48.478473713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:48.487508 containerd[1588]: time="2026-01-21T06:17:48.487474605Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 21 06:17:48.491153 containerd[1588]: time="2026-01-21T06:17:48.490059528Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:48.501291 containerd[1588]: time="2026-01-21T06:17:48.499928755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:17:48.504901 containerd[1588]: time="2026-01-21T06:17:48.504872564Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 16.043440144s" Jan 21 06:17:48.506540 containerd[1588]: time="2026-01-21T06:17:48.505110866Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 21 06:17:48.507490 containerd[1588]: time="2026-01-21T06:17:48.507464359Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 21 06:17:50.306138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 21 06:17:50.331544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:17:50.737211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067964751.mount: Deactivated successfully. Jan 21 06:17:52.056041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:17:52.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:52.107934 kernel: audit: type=1130 audit(1768976272.056:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:17:52.162244 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:17:53.131172 kubelet[2319]: E0121 06:17:53.130937 2319 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:17:53.143206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:17:53.144248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:17:53.145373 systemd[1]: kubelet.service: Consumed 2.245s CPU time, 110.8M memory peak. Jan 21 06:17:53.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=failed' Jan 21 06:17:53.196441 kernel: audit: type=1131 audit(1768976273.142:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:02.671353 containerd[1588]: time="2026-01-21T06:18:02.667475799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:18:02.685189 containerd[1588]: time="2026-01-21T06:18:02.682256959Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20679551" Jan 21 06:18:02.695398 containerd[1588]: time="2026-01-21T06:18:02.695321499Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:18:02.719324 containerd[1588]: time="2026-01-21T06:18:02.718340213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:18:02.740239 containerd[1588]: time="2026-01-21T06:18:02.737531866Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 14.229929264s" Jan 21 06:18:02.740239 containerd[1588]: time="2026-01-21T06:18:02.737577242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 21 06:18:02.740464 containerd[1588]: 
time="2026-01-21T06:18:02.740431841Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 21 06:18:03.258391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 21 06:18:03.296301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:18:04.395433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973835431.mount: Deactivated successfully. Jan 21 06:18:04.522575 containerd[1588]: time="2026-01-21T06:18:04.521319169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 06:18:04.536516 containerd[1588]: time="2026-01-21T06:18:04.532590803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 06:18:04.565343 containerd[1588]: time="2026-01-21T06:18:04.565177337Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 06:18:04.633330 containerd[1588]: time="2026-01-21T06:18:04.633123255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 06:18:04.642316 containerd[1588]: time="2026-01-21T06:18:04.641181005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.90069402s" Jan 21 06:18:04.642316 containerd[1588]: 
time="2026-01-21T06:18:04.641361897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 21 06:18:04.664538 containerd[1588]: time="2026-01-21T06:18:04.660377638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 21 06:18:06.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:06.528426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:18:06.580053 kernel: audit: type=1130 audit(1768976286.529:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:06.596457 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:18:08.935926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745145358.mount: Deactivated successfully. Jan 21 06:18:09.431462 kubelet[2380]: E0121 06:18:09.426517 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:18:09.439516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:18:09.441279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:18:09.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 21 06:18:09.502224 kernel: audit: type=1131 audit(1768976289.446:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:09.448139 systemd[1]: kubelet.service: Consumed 2.163s CPU time, 109M memory peak. Jan 21 06:18:13.867508 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3630292707 wd_nsec: 3630290964 Jan 21 06:18:19.496477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 21 06:18:19.537065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:18:21.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:21.896477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:18:21.956189 kernel: audit: type=1130 audit(1768976301.896:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:18:22.164020 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:18:23.350953 kubelet[2450]: E0121 06:18:23.349244 2450 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:18:23.370514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:18:23.374303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:18:23.381158 systemd[1]: kubelet.service: Consumed 2.607s CPU time, 111.9M memory peak. Jan 21 06:18:23.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:23.441828 kernel: audit: type=1131 audit(1768976303.380:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:33.495473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 21 06:18:33.510104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:18:37.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:37.061453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 21 06:18:37.110092 kernel: audit: type=1130 audit(1768976317.061:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:37.124534 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:18:37.931241 kubelet[2470]: E0121 06:18:37.930188 2470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:18:37.943477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:18:37.944376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:18:37.946585 systemd[1]: kubelet.service: Consumed 2.896s CPU time, 110M memory peak. Jan 21 06:18:37.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:38.004485 kernel: audit: type=1131 audit(1768976317.945:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 21 06:18:38.763101 containerd[1588]: time="2026-01-21T06:18:38.762384012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:18:38.782393 containerd[1588]: time="2026-01-21T06:18:38.776600739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58915995" Jan 21 06:18:38.787201 containerd[1588]: time="2026-01-21T06:18:38.787148527Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:18:38.807246 containerd[1588]: time="2026-01-21T06:18:38.806083764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:18:38.817335 containerd[1588]: time="2026-01-21T06:18:38.817263475Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 34.152718343s" Jan 21 06:18:38.818357 containerd[1588]: time="2026-01-21T06:18:38.817473430Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 21 06:18:47.999138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 21 06:18:48.004585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:18:49.370519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 21 06:18:49.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:49.427536 kernel: audit: type=1130 audit(1768976329.373:311): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:49.444165 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 06:18:49.973219 kubelet[2511]: E0121 06:18:49.973170 2511 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 06:18:49.982307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 06:18:49.983247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 06:18:49.993380 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 111M memory peak. Jan 21 06:18:49.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:50.049404 kernel: audit: type=1131 audit(1768976329.991:312): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:53.194306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 21 06:18:53.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:53.200471 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 111M memory peak. Jan 21 06:18:53.212567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:18:53.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:53.269549 kernel: audit: type=1130 audit(1768976333.193:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:53.269958 kernel: audit: type=1131 audit(1768976333.193:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:18:53.358429 systemd[1]: Reload requested from client PID 2526 ('systemctl') (unit session-8.scope)... Jan 21 06:18:53.358449 systemd[1]: Reloading... Jan 21 06:18:53.762957 zram_generator::config[2572]: No configuration found. Jan 21 06:18:54.920270 systemd[1]: Reloading finished in 1560 ms. 
Jan 21 06:18:55.023000 audit: BPF prog-id=63 op=LOAD Jan 21 06:18:55.035000 audit: BPF prog-id=60 op=UNLOAD Jan 21 06:18:55.045221 kernel: audit: type=1334 audit(1768976335.023:315): prog-id=63 op=LOAD Jan 21 06:18:55.045276 kernel: audit: type=1334 audit(1768976335.035:316): prog-id=60 op=UNLOAD Jan 21 06:18:55.035000 audit: BPF prog-id=64 op=LOAD Jan 21 06:18:55.085354 kernel: audit: type=1334 audit(1768976335.035:317): prog-id=64 op=LOAD Jan 21 06:18:55.085481 kernel: audit: type=1334 audit(1768976335.035:318): prog-id=65 op=LOAD Jan 21 06:18:55.035000 audit: BPF prog-id=65 op=LOAD Jan 21 06:18:55.108178 kernel: audit: type=1334 audit(1768976335.035:319): prog-id=61 op=UNLOAD Jan 21 06:18:55.035000 audit: BPF prog-id=61 op=UNLOAD Jan 21 06:18:55.035000 audit: BPF prog-id=62 op=UNLOAD Jan 21 06:18:55.143593 kernel: audit: type=1334 audit(1768976335.035:320): prog-id=62 op=UNLOAD Jan 21 06:18:55.162361 kernel: audit: type=1334 audit(1768976335.039:321): prog-id=66 op=LOAD Jan 21 06:18:55.039000 audit: BPF prog-id=66 op=LOAD Jan 21 06:18:55.180201 kernel: audit: type=1334 audit(1768976335.039:322): prog-id=67 op=LOAD Jan 21 06:18:55.039000 audit: BPF prog-id=67 op=LOAD Jan 21 06:18:55.039000 audit: BPF prog-id=43 op=UNLOAD Jan 21 06:18:55.039000 audit: BPF prog-id=44 op=UNLOAD Jan 21 06:18:55.241452 kernel: audit: type=1334 audit(1768976335.039:323): prog-id=43 op=UNLOAD Jan 21 06:18:55.241535 kernel: audit: type=1334 audit(1768976335.039:324): prog-id=44 op=UNLOAD Jan 21 06:18:55.049000 audit: BPF prog-id=68 op=LOAD Jan 21 06:18:55.049000 audit: BPF prog-id=45 op=UNLOAD Jan 21 06:18:55.049000 audit: BPF prog-id=69 op=LOAD Jan 21 06:18:55.049000 audit: BPF prog-id=70 op=LOAD Jan 21 06:18:55.049000 audit: BPF prog-id=46 op=UNLOAD Jan 21 06:18:55.049000 audit: BPF prog-id=47 op=UNLOAD Jan 21 06:18:55.051000 audit: BPF prog-id=71 op=LOAD Jan 21 06:18:55.053000 audit: BPF prog-id=48 op=UNLOAD Jan 21 06:18:55.053000 audit: BPF prog-id=72 op=LOAD Jan 21 06:18:55.053000 
audit: BPF prog-id=73 op=LOAD Jan 21 06:18:55.053000 audit: BPF prog-id=49 op=UNLOAD Jan 21 06:18:55.055000 audit: BPF prog-id=50 op=UNLOAD Jan 21 06:18:55.058000 audit: BPF prog-id=74 op=LOAD Jan 21 06:18:55.058000 audit: BPF prog-id=59 op=UNLOAD Jan 21 06:18:55.068000 audit: BPF prog-id=75 op=LOAD Jan 21 06:18:55.165000 audit: BPF prog-id=54 op=UNLOAD Jan 21 06:18:55.171000 audit: BPF prog-id=76 op=LOAD Jan 21 06:18:55.171000 audit: BPF prog-id=51 op=UNLOAD Jan 21 06:18:55.171000 audit: BPF prog-id=77 op=LOAD Jan 21 06:18:55.172000 audit: BPF prog-id=78 op=LOAD Jan 21 06:18:55.172000 audit: BPF prog-id=52 op=UNLOAD Jan 21 06:18:55.172000 audit: BPF prog-id=53 op=UNLOAD Jan 21 06:18:55.174000 audit: BPF prog-id=79 op=LOAD Jan 21 06:18:55.174000 audit: BPF prog-id=55 op=UNLOAD Jan 21 06:18:55.174000 audit: BPF prog-id=80 op=LOAD Jan 21 06:18:55.174000 audit: BPF prog-id=81 op=LOAD Jan 21 06:18:55.174000 audit: BPF prog-id=56 op=UNLOAD Jan 21 06:18:55.174000 audit: BPF prog-id=57 op=UNLOAD Jan 21 06:18:55.176000 audit: BPF prog-id=82 op=LOAD Jan 21 06:18:55.176000 audit: BPF prog-id=58 op=UNLOAD Jan 21 06:18:55.268615 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 06:18:55.269438 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 06:18:55.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 06:18:55.272523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:18:55.287281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:18:56.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:18:56.243530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:18:56.298540 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 06:18:57.050980 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:18:57.051572 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 06:18:57.051572 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:18:57.051572 kubelet[2619]: I0121 06:18:57.051340 2619 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 06:18:57.861474 kubelet[2619]: I0121 06:18:57.861264 2619 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 06:18:57.861474 kubelet[2619]: I0121 06:18:57.861458 2619 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 06:18:57.863019 kubelet[2619]: I0121 06:18:57.862515 2619 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 06:18:58.112615 kubelet[2619]: E0121 06:18:58.108941 2619 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 06:18:58.112615 kubelet[2619]: I0121 06:18:58.112506 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 06:18:58.168298 kubelet[2619]: I0121 06:18:58.167426 2619 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 06:18:58.238943 kubelet[2619]: I0121 06:18:58.238523 2619 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 21 06:18:58.240509 kubelet[2619]: I0121 06:18:58.239318 2619 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 06:18:58.240509 kubelet[2619]: I0121 06:18:58.240046 2619 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"Min
Reclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 06:18:58.240509 kubelet[2619]: I0121 06:18:58.240366 2619 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 06:18:58.240509 kubelet[2619]: I0121 06:18:58.240374 2619 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 06:18:58.243967 kubelet[2619]: I0121 06:18:58.243586 2619 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:18:58.261842 kubelet[2619]: I0121 06:18:58.261367 2619 kubelet.go:480] "Attempting to sync node with API server" Jan 21 06:18:58.261842 kubelet[2619]: I0121 06:18:58.261569 2619 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 06:18:58.262525 kubelet[2619]: I0121 06:18:58.261613 2619 kubelet.go:386] "Adding apiserver pod source" Jan 21 06:18:58.269499 kubelet[2619]: I0121 06:18:58.266960 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 06:18:58.277355 kubelet[2619]: E0121 06:18:58.276570 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 06:18:58.284400 kubelet[2619]: E0121 06:18:58.283386 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 06:18:58.294237 kubelet[2619]: I0121 06:18:58.293401 2619 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 06:18:58.300573 kubelet[2619]: I0121 06:18:58.298057 2619 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 06:18:58.300573 kubelet[2619]: W0121 06:18:58.300533 2619 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 21 06:18:58.330580 kubelet[2619]: I0121 06:18:58.330361 2619 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 06:18:58.331611 kubelet[2619]: I0121 06:18:58.331351 2619 server.go:1289] "Started kubelet" Jan 21 06:18:58.336925 kubelet[2619]: I0121 06:18:58.336589 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 06:18:58.339917 kubelet[2619]: I0121 06:18:58.338533 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 06:18:58.340511 kubelet[2619]: I0121 06:18:58.340287 2619 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 06:18:58.343265 kubelet[2619]: I0121 06:18:58.342507 2619 server.go:317] "Adding debug handlers to kubelet server" Jan 21 06:18:58.352925 kubelet[2619]: I0121 06:18:58.350987 2619 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 06:18:58.352925 kubelet[2619]: I0121 06:18:58.351565 2619 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 06:18:58.352925 kubelet[2619]: E0121 06:18:58.352300 2619 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 21 06:18:58.358379 kubelet[2619]: E0121 06:18:58.351273 2619 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188caa9b76437740 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-21 06:18:58.330548032 +0000 UTC m=+1.997065703,LastTimestamp:2026-01-21 06:18:58.330548032 +0000 UTC m=+1.997065703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 21 06:18:58.364583 kubelet[2619]: I0121 06:18:58.345593 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 06:18:58.367980 kubelet[2619]: I0121 06:18:58.366614 2619 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 06:18:58.367980 kubelet[2619]: I0121 06:18:58.366993 2619 reconciler.go:26] "Reconciler: start to sync state" Jan 21 06:18:58.374962 kubelet[2619]: I0121 06:18:58.374934 2619 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 06:18:58.376527 kubelet[2619]: E0121 06:18:58.373998 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Jan 21 06:18:58.376527 kubelet[2619]: E0121 06:18:58.376305 2619 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 06:18:58.379242 kubelet[2619]: E0121 06:18:58.378394 2619 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 06:18:58.382521 kubelet[2619]: I0121 06:18:58.381958 2619 factory.go:223] Registration of the containerd container factory successfully Jan 21 06:18:58.382521 kubelet[2619]: I0121 06:18:58.382266 2619 factory.go:223] Registration of the systemd container factory successfully Jan 21 06:18:58.452512 kubelet[2619]: E0121 06:18:58.452484 2619 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 21 06:18:58.451000 audit[2640]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.451000 audit[2640]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd3837d960 a2=0 a3=0 items=0 ppid=2619 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 06:18:58.468540 kubelet[2619]: I0121 06:18:58.468507 2619 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 06:18:58.468540 kubelet[2619]: I0121 06:18:58.468530 2619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 06:18:58.468540 kubelet[2619]: I0121 06:18:58.468549 2619 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:18:58.476000 
audit[2642]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.476000 audit[2642]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4286b9e0 a2=0 a3=0 items=0 ppid=2619 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 06:18:58.536000 audit[2644]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.536000 audit[2644]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4a80de90 a2=0 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.536000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 06:18:58.557289 kubelet[2619]: E0121 06:18:58.556573 2619 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 21 06:18:58.562795 kubelet[2619]: I0121 06:18:58.562602 2619 policy_none.go:49] "None policy: Start" Jan 21 06:18:58.564320 kubelet[2619]: I0121 06:18:58.563385 2619 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 06:18:58.564320 kubelet[2619]: I0121 06:18:58.563407 2619 state_mem.go:35] "Initializing new in-memory state store" Jan 21 06:18:58.581465 kubelet[2619]: E0121 06:18:58.581264 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Jan 21 06:18:58.581000 audit[2646]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.581000 audit[2646]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd30c1edb0 a2=0 a3=0 items=0 ppid=2619 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 06:18:58.613380 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 21 06:18:58.644452 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 21 06:18:58.651000 audit[2649]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.651000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff86752ef0 a2=0 a3=0 items=0 ppid=2619 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 21 06:18:58.655415 kubelet[2619]: I0121 06:18:58.654960 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 06:18:58.657947 kubelet[2619]: E0121 06:18:58.657616 2619 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 21 06:18:58.667000 audit[2652]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.667000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd33c4200 a2=0 a3=0 items=0 ppid=2619 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 06:18:58.667915 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 21 06:18:58.672000 audit[2651]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:18:58.672000 audit[2651]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd2906020 a2=0 a3=0 items=0 ppid=2619 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 06:18:58.674936 kubelet[2619]: I0121 06:18:58.674606 2619 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 06:18:58.675059 kubelet[2619]: I0121 06:18:58.675044 2619 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 06:18:58.675593 kubelet[2619]: I0121 06:18:58.675576 2619 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 21 06:18:58.676606 kubelet[2619]: I0121 06:18:58.675902 2619 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 06:18:58.677476 kubelet[2619]: E0121 06:18:58.677448 2619 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 06:18:58.682922 kubelet[2619]: E0121 06:18:58.680592 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 06:18:58.686965 kubelet[2619]: E0121 06:18:58.685377 2619 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 06:18:58.692379 kubelet[2619]: I0121 06:18:58.691948 2619 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 06:18:58.695004 kubelet[2619]: I0121 06:18:58.693247 2619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 06:18:58.695004 kubelet[2619]: I0121 06:18:58.693565 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 06:18:58.696000 audit[2653]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.696000 audit[2653]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd44ba260 a2=0 a3=0 items=0 ppid=2619 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.696000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 06:18:58.706000 audit[2654]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:18:58.706000 audit[2654]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdac03ea90 a2=0 a3=0 items=0 ppid=2619 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 06:18:58.707952 kubelet[2619]: E0121 06:18:58.705489 2619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 21 06:18:58.707952 kubelet[2619]: E0121 06:18:58.705529 2619 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 21 06:18:58.717000 audit[2656]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:18:58.717000 audit[2656]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe062f1500 a2=0 a3=0 items=0 ppid=2619 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 06:18:58.720000 audit[2657]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2657 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:18:58.720000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff383f9e40 a2=0 a3=0 items=0 ppid=2619 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 06:18:58.738000 audit[2658]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:18:58.738000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe61c300a0 a2=0 a3=0 items=0 ppid=2619 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:58.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 06:18:58.805312 kubelet[2619]: I0121 06:18:58.804490 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:18:58.813080 kubelet[2619]: E0121 06:18:58.811417 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 21 06:18:58.845412 systemd[1]: Created slice kubepods-burstable-pod5371dcb95d2851f3d2c6b2ebc450a662.slice - libcontainer container kubepods-burstable-pod5371dcb95d2851f3d2c6b2ebc450a662.slice. 
Jan 21 06:18:58.867555 kubelet[2619]: E0121 06:18:58.867503 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:18:58.876410 kubelet[2619]: I0121 06:18:58.875323 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:18:58.876410 kubelet[2619]: I0121 06:18:58.875526 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5371dcb95d2851f3d2c6b2ebc450a662-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5371dcb95d2851f3d2c6b2ebc450a662\") " pod="kube-system/kube-apiserver-localhost" Jan 21 06:18:58.876410 kubelet[2619]: I0121 06:18:58.875558 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:18:58.876410 kubelet[2619]: I0121 06:18:58.875595 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:18:58.876410 kubelet[2619]: I0121 06:18:58.875937 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:18:58.877332 kubelet[2619]: I0121 06:18:58.876002 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:18:58.877332 kubelet[2619]: I0121 06:18:58.876031 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 21 06:18:58.877332 kubelet[2619]: I0121 06:18:58.876054 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5371dcb95d2851f3d2c6b2ebc450a662-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5371dcb95d2851f3d2c6b2ebc450a662\") " pod="kube-system/kube-apiserver-localhost" Jan 21 06:18:58.877332 kubelet[2619]: I0121 06:18:58.876073 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5371dcb95d2851f3d2c6b2ebc450a662-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5371dcb95d2851f3d2c6b2ebc450a662\") " pod="kube-system/kube-apiserver-localhost" Jan 21 06:18:58.892582 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container 
kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 21 06:18:58.910336 kubelet[2619]: E0121 06:18:58.907983 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:18:58.917308 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 21 06:18:58.934274 kubelet[2619]: E0121 06:18:58.931011 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:18:58.992557 kubelet[2619]: E0121 06:18:58.991530 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" Jan 21 06:18:59.020030 kubelet[2619]: I0121 06:18:59.019444 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:18:59.020509 kubelet[2619]: E0121 06:18:59.020331 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 21 06:18:59.172261 kubelet[2619]: E0121 06:18:59.171559 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:18:59.186578 containerd[1588]: time="2026-01-21T06:18:59.186071093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5371dcb95d2851f3d2c6b2ebc450a662,Namespace:kube-system,Attempt:0,}" Jan 21 06:18:59.212066 kubelet[2619]: E0121 06:18:59.210400 2619 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:18:59.214931 containerd[1588]: time="2026-01-21T06:18:59.213579423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 21 06:18:59.233524 kubelet[2619]: E0121 06:18:59.233494 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:18:59.238603 containerd[1588]: time="2026-01-21T06:18:59.238541551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 21 06:18:59.432486 kubelet[2619]: I0121 06:18:59.431519 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:18:59.437602 kubelet[2619]: E0121 06:18:59.437551 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 21 06:18:59.470474 containerd[1588]: time="2026-01-21T06:18:59.470429544Z" level=info msg="connecting to shim c80d0c972945eee2451d522d5174bae23e49d612b46678b4579fbbced4cdfa11" address="unix:///run/containerd/s/f9349328292811e2df78c05a973db64cc658ddacb7e9bd6a257c2920cc246ec9" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:18:59.523609 containerd[1588]: time="2026-01-21T06:18:59.522445861Z" level=info msg="connecting to shim c754b39ede2b51b7d8775c4c960662accc8eadbe17c4c29b97d2d929d0ecc8a9" address="unix:///run/containerd/s/1c0ee36ffebc453921801ce91bf81d709d7a0dbda32f0693be71adcc244bc6c7" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:18:59.539604 containerd[1588]: time="2026-01-21T06:18:59.539555800Z" level=info msg="connecting to 
shim bc3c6cb3f124811e7490d91fc7866825f07af38b43ba361fb8836c3c9994224d" address="unix:///run/containerd/s/18c65e8113d2f5de6ccf732a5ea7790a9a2e59a8f183231559194fc4d3880fb0" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:18:59.568010 kubelet[2619]: E0121 06:18:59.567965 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 06:18:59.569284 kubelet[2619]: E0121 06:18:59.568568 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 06:18:59.697435 systemd[1]: Started cri-containerd-c80d0c972945eee2451d522d5174bae23e49d612b46678b4579fbbced4cdfa11.scope - libcontainer container c80d0c972945eee2451d522d5174bae23e49d612b46678b4579fbbced4cdfa11. Jan 21 06:18:59.782050 kubelet[2619]: E0121 06:18:59.780960 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 06:18:59.793519 systemd[1]: Started cri-containerd-c754b39ede2b51b7d8775c4c960662accc8eadbe17c4c29b97d2d929d0ecc8a9.scope - libcontainer container c754b39ede2b51b7d8775c4c960662accc8eadbe17c4c29b97d2d929d0ecc8a9. 
Jan 21 06:18:59.796608 kubelet[2619]: E0121 06:18:59.795099 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" Jan 21 06:18:59.820000 audit: BPF prog-id=83 op=LOAD Jan 21 06:18:59.829000 audit: BPF prog-id=84 op=LOAD Jan 21 06:18:59.829000 audit[2705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.829000 audit: BPF prog-id=84 op=UNLOAD Jan 21 06:18:59.829000 audit[2705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.833000 audit: BPF prog-id=85 op=LOAD Jan 21 06:18:59.833000 audit[2705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.834000 audit: BPF prog-id=86 op=LOAD Jan 21 06:18:59.834000 audit[2705]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.834000 audit: BPF prog-id=86 op=UNLOAD Jan 21 06:18:59.834000 audit[2705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.835000 audit: BPF prog-id=85 op=UNLOAD Jan 21 06:18:59.835000 audit[2705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.835000 audit: BPF prog-id=87 op=LOAD Jan 21 06:18:59.835000 audit[2705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2672 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338306430633937323934356565653234353164353232643531373462 Jan 21 06:18:59.854010 systemd[1]: Started cri-containerd-bc3c6cb3f124811e7490d91fc7866825f07af38b43ba361fb8836c3c9994224d.scope - libcontainer container bc3c6cb3f124811e7490d91fc7866825f07af38b43ba361fb8836c3c9994224d. 
Jan 21 06:18:59.864000 audit: BPF prog-id=88 op=LOAD Jan 21 06:18:59.870000 audit: BPF prog-id=89 op=LOAD Jan 21 06:18:59.870000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.871000 audit: BPF prog-id=89 op=UNLOAD Jan 21 06:18:59.871000 audit[2716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.878000 audit: BPF prog-id=90 op=LOAD Jan 21 06:18:59.878000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.878000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.878000 audit: BPF prog-id=91 op=LOAD Jan 21 06:18:59.878000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.879000 audit: BPF prog-id=91 op=UNLOAD Jan 21 06:18:59.879000 audit[2716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.879000 audit: BPF prog-id=90 op=UNLOAD Jan 21 06:18:59.879000 audit[2716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:18:59.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.880000 audit: BPF prog-id=92 op=LOAD Jan 21 06:18:59.880000 audit[2716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2685 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337353462333965646532623531623764383737356334633936303636 Jan 21 06:18:59.933000 audit: BPF prog-id=93 op=LOAD Jan 21 06:18:59.936000 audit: BPF prog-id=94 op=LOAD Jan 21 06:18:59.936000 audit[2735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:18:59.937000 audit: BPF prog-id=94 op=UNLOAD Jan 21 06:18:59.937000 audit[2735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:18:59.937000 audit: BPF prog-id=95 op=LOAD Jan 21 06:18:59.937000 audit[2735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:18:59.937000 audit: BPF prog-id=96 op=LOAD Jan 21 06:18:59.937000 audit[2735]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:18:59.937000 audit: BPF prog-id=96 op=UNLOAD Jan 21 06:18:59.937000 audit[2735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:18:59.937000 audit: BPF prog-id=95 op=UNLOAD Jan 21 06:18:59.937000 audit[2735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:18:59.941000 audit: BPF prog-id=97 op=LOAD Jan 21 06:18:59.941000 audit[2735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2691 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:18:59.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263336336636233663132343831316537343930643931666337383636 Jan 21 06:19:00.024453 kubelet[2619]: E0121 06:19:00.024070 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 06:19:00.122080 containerd[1588]: time="2026-01-21T06:19:00.120064626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c80d0c972945eee2451d522d5174bae23e49d612b46678b4579fbbced4cdfa11\"" Jan 21 06:19:00.134936 kubelet[2619]: E0121 06:19:00.132515 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:00.171389 containerd[1588]: time="2026-01-21T06:19:00.171339881Z" level=info msg="CreateContainer within sandbox \"c80d0c972945eee2451d522d5174bae23e49d612b46678b4579fbbced4cdfa11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 06:19:00.212534 kubelet[2619]: E0121 06:19:00.212498 2619 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 06:19:00.245514 kubelet[2619]: I0121 06:19:00.245390 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:19:00.252109 containerd[1588]: time="2026-01-21T06:19:00.250940329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5371dcb95d2851f3d2c6b2ebc450a662,Namespace:kube-system,Attempt:0,} returns sandbox id \"c754b39ede2b51b7d8775c4c960662accc8eadbe17c4c29b97d2d929d0ecc8a9\"" Jan 21 06:19:00.253033 
kubelet[2619]: E0121 06:19:00.251043 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 21 06:19:00.256849 kubelet[2619]: E0121 06:19:00.256066 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:00.297436 containerd[1588]: time="2026-01-21T06:19:00.295969107Z" level=info msg="CreateContainer within sandbox \"c754b39ede2b51b7d8775c4c960662accc8eadbe17c4c29b97d2d929d0ecc8a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 06:19:00.305031 containerd[1588]: time="2026-01-21T06:19:00.304413347Z" level=info msg="Container aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:00.310482 containerd[1588]: time="2026-01-21T06:19:00.309868822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc3c6cb3f124811e7490d91fc7866825f07af38b43ba361fb8836c3c9994224d\"" Jan 21 06:19:00.312480 kubelet[2619]: E0121 06:19:00.311307 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:00.357450 containerd[1588]: time="2026-01-21T06:19:00.356583293Z" level=info msg="CreateContainer within sandbox \"bc3c6cb3f124811e7490d91fc7866825f07af38b43ba361fb8836c3c9994224d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 06:19:00.430523 containerd[1588]: time="2026-01-21T06:19:00.430475396Z" level=info msg="CreateContainer within sandbox \"c80d0c972945eee2451d522d5174bae23e49d612b46678b4579fbbced4cdfa11\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174\"" Jan 21 06:19:00.457971 containerd[1588]: time="2026-01-21T06:19:00.456433914Z" level=info msg="Container 66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:00.458538 containerd[1588]: time="2026-01-21T06:19:00.458506565Z" level=info msg="StartContainer for \"aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174\"" Jan 21 06:19:00.472448 containerd[1588]: time="2026-01-21T06:19:00.472421967Z" level=info msg="connecting to shim aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174" address="unix:///run/containerd/s/f9349328292811e2df78c05a973db64cc658ddacb7e9bd6a257c2920cc246ec9" protocol=ttrpc version=3 Jan 21 06:19:00.512397 containerd[1588]: time="2026-01-21T06:19:00.512360437Z" level=info msg="CreateContainer within sandbox \"c754b39ede2b51b7d8775c4c960662accc8eadbe17c4c29b97d2d929d0ecc8a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb\"" Jan 21 06:19:00.518328 containerd[1588]: time="2026-01-21T06:19:00.518305076Z" level=info msg="StartContainer for \"66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb\"" Jan 21 06:19:00.522354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622029628.mount: Deactivated successfully. 
Jan 21 06:19:00.529420 containerd[1588]: time="2026-01-21T06:19:00.528536402Z" level=info msg="connecting to shim 66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb" address="unix:///run/containerd/s/1c0ee36ffebc453921801ce91bf81d709d7a0dbda32f0693be71adcc244bc6c7" protocol=ttrpc version=3 Jan 21 06:19:00.542411 containerd[1588]: time="2026-01-21T06:19:00.541535248Z" level=info msg="Container bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:00.614055 containerd[1588]: time="2026-01-21T06:19:00.610963172Z" level=info msg="CreateContainer within sandbox \"bc3c6cb3f124811e7490d91fc7866825f07af38b43ba361fb8836c3c9994224d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea\"" Jan 21 06:19:00.625435 containerd[1588]: time="2026-01-21T06:19:00.624944647Z" level=info msg="StartContainer for \"bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea\"" Jan 21 06:19:00.630300 containerd[1588]: time="2026-01-21T06:19:00.628582796Z" level=info msg="connecting to shim bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea" address="unix:///run/containerd/s/18c65e8113d2f5de6ccf732a5ea7790a9a2e59a8f183231559194fc4d3880fb0" protocol=ttrpc version=3 Jan 21 06:19:00.674569 systemd[1]: Started cri-containerd-aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174.scope - libcontainer container aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174. Jan 21 06:19:00.720411 systemd[1]: Started cri-containerd-66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb.scope - libcontainer container 66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb. 
Jan 21 06:19:00.763000 audit: BPF prog-id=98 op=LOAD Jan 21 06:19:00.798124 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 21 06:19:00.798370 kernel: audit: type=1334 audit(1768976340.763:393): prog-id=98 op=LOAD Jan 21 06:19:00.803518 systemd[1]: Started cri-containerd-bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea.scope - libcontainer container bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea. Jan 21 06:19:00.808000 audit: BPF prog-id=99 op=LOAD Jan 21 06:19:00.808000 audit[2798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.894612 kernel: audit: type=1334 audit(1768976340.808:394): prog-id=99 op=LOAD Jan 21 06:19:00.895025 kernel: audit: type=1300 audit(1768976340.808:394): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.895066 kernel: audit: type=1327 audit(1768976340.808:394): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.825000 audit: BPF prog-id=99 op=UNLOAD Jan 21 06:19:00.984935 
kernel: audit: type=1334 audit(1768976340.825:395): prog-id=99 op=UNLOAD Jan 21 06:19:00.825000 audit[2798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.058329 kernel: audit: type=1300 audit(1768976340.825:395): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.058424 kernel: audit: type=1327 audit(1768976340.825:395): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:01.139962 kernel: audit: type=1334 audit(1768976340.863:396): prog-id=100 op=LOAD Jan 21 06:19:00.863000 audit: BPF prog-id=100 op=LOAD Jan 21 06:19:00.863000 audit[2798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.236470 kernel: audit: type=1300 audit(1768976340.863:396): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 
items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.236595 kernel: audit: type=1327 audit(1768976340.863:396): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.863000 audit: BPF prog-id=101 op=LOAD Jan 21 06:19:00.863000 audit[2798]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.864000 audit: BPF prog-id=101 op=UNLOAD Jan 21 06:19:00.864000 audit[2798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.864000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.864000 audit: BPF prog-id=100 op=UNLOAD Jan 21 06:19:00.864000 audit[2798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.864000 audit: BPF prog-id=102 op=LOAD Jan 21 06:19:00.864000 audit[2798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2672 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165633931313765383835396565643235353430666333353363666365 Jan 21 06:19:00.875000 audit: BPF prog-id=103 op=LOAD Jan 21 06:19:00.877000 audit: BPF prog-id=104 op=LOAD Jan 21 06:19:00.877000 audit[2815]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=2691 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.877000 audit: BPF prog-id=104 op=UNLOAD Jan 21 06:19:00.877000 audit[2815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2691 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.879000 audit: BPF prog-id=105 op=LOAD Jan 21 06:19:00.879000 audit[2815]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=2691 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.879000 audit: BPF prog-id=106 op=LOAD Jan 21 06:19:00.879000 audit: BPF prog-id=107 op=LOAD Jan 21 06:19:00.879000 audit[2815]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=2691 pid=2815 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.880000 audit: BPF prog-id=107 op=UNLOAD Jan 21 06:19:00.880000 audit[2815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2691 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.880000 audit: BPF prog-id=105 op=UNLOAD Jan 21 06:19:00.880000 audit[2815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2691 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.880000 audit: BPF prog-id=108 op=LOAD Jan 21 06:19:00.880000 audit[2815]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 
a3=0 items=0 ppid=2691 pid=2815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262663930346634316431303664646630383634613161373061363936 Jan 21 06:19:00.881000 audit: BPF prog-id=109 op=LOAD Jan 21 06:19:00.881000 audit[2804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:00.881000 audit: BPF prog-id=109 op=UNLOAD Jan 21 06:19:00.881000 audit[2804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:00.882000 audit: BPF prog-id=110 op=LOAD Jan 21 06:19:00.882000 audit[2804]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:00.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:01.129000 audit: BPF prog-id=111 op=LOAD Jan 21 06:19:01.129000 audit[2804]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:01.316000 audit: BPF prog-id=111 op=UNLOAD Jan 21 06:19:01.316000 audit[2804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:01.316000 audit: BPF prog-id=110 op=UNLOAD Jan 21 06:19:01.316000 
audit[2804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:01.316000 audit: BPF prog-id=112 op=LOAD Jan 21 06:19:01.316000 audit[2804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2685 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:01.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613864396433353131386333316330303463616235613961343761 Jan 21 06:19:01.365980 containerd[1588]: time="2026-01-21T06:19:01.365930181Z" level=info msg="StartContainer for \"bbf904f41d106ddf0864a1a70a696f9826954e22d9b450d8ab8fc12f8cbf64ea\" returns successfully" Jan 21 06:19:01.425586 kubelet[2619]: E0121 06:19:01.415612 2619 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="3.2s" Jan 21 06:19:01.460335 containerd[1588]: time="2026-01-21T06:19:01.460080357Z" level=info msg="StartContainer for \"aec9117e8859eed25540fc353cfceaa17a5c23095f17d3dea9df1de913365174\" returns 
successfully" Jan 21 06:19:01.749321 containerd[1588]: time="2026-01-21T06:19:01.749053074Z" level=info msg="StartContainer for \"66a8d9d35118c31c004cab5a9a47af3dd0ee23dc0a59b34843ebece13494adbb\" returns successfully" Jan 21 06:19:01.798366 kubelet[2619]: E0121 06:19:01.795021 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 06:19:01.799415 kubelet[2619]: E0121 06:19:01.799383 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:01.802029 kubelet[2619]: E0121 06:19:01.801997 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:01.840078 kubelet[2619]: E0121 06:19:01.839524 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 06:19:01.881136 kubelet[2619]: I0121 06:19:01.880933 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:19:01.886945 kubelet[2619]: E0121 06:19:01.884945 2619 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Jan 21 06:19:01.896094 kubelet[2619]: E0121 06:19:01.895590 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:01.902580 kubelet[2619]: E0121 06:19:01.902388 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:01.916099 kubelet[2619]: E0121 06:19:01.915135 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:01.916099 kubelet[2619]: E0121 06:19:01.915476 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:02.058541 kubelet[2619]: E0121 06:19:02.057468 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 06:19:02.763141 kubelet[2619]: E0121 06:19:02.761028 2619 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 06:19:02.933921 kubelet[2619]: E0121 06:19:02.933051 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:02.933921 kubelet[2619]: E0121 06:19:02.933397 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 21 06:19:02.942477 kubelet[2619]: E0121 06:19:02.942006 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:02.942477 kubelet[2619]: E0121 06:19:02.942124 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:02.945437 kubelet[2619]: E0121 06:19:02.944525 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:02.947506 kubelet[2619]: E0121 06:19:02.945516 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:03.946957 kubelet[2619]: E0121 06:19:03.946020 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:03.946957 kubelet[2619]: E0121 06:19:03.946175 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:03.957025 kubelet[2619]: E0121 06:19:03.957006 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:03.957399 kubelet[2619]: E0121 06:19:03.957379 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:05.099925 kubelet[2619]: I0121 06:19:05.099542 2619 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:19:08.450210 kubelet[2619]: E0121 
06:19:08.448017 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:08.452790 kubelet[2619]: E0121 06:19:08.452179 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:08.706470 kubelet[2619]: E0121 06:19:08.705913 2619 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 21 06:19:09.646888 kubelet[2619]: E0121 06:19:09.645135 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:09.651288 kubelet[2619]: E0121 06:19:09.651226 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:11.059538 kubelet[2619]: E0121 06:19:11.059171 2619 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 21 06:19:11.059538 kubelet[2619]: E0121 06:19:11.059292 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:11.990094 kubelet[2619]: E0121 06:19:11.990036 2619 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 21 06:19:12.104179 kubelet[2619]: I0121 06:19:12.103221 2619 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 21 06:19:12.158171 kubelet[2619]: E0121 06:19:12.155962 2619 event.go:359] "Server rejected event (will not retry!)" err="namespaces 
\"default\" not found" event="&Event{ObjectMeta:{localhost.188caa9b76437740 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-21 06:18:58.330548032 +0000 UTC m=+1.997065703,LastTimestamp:2026-01-21 06:18:58.330548032 +0000 UTC m=+1.997065703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 21 06:19:12.169587 kubelet[2619]: I0121 06:19:12.167121 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:12.297303 kubelet[2619]: I0121 06:19:12.294016 2619 apiserver.go:52] "Watching apiserver" Jan 21 06:19:12.353602 kubelet[2619]: E0121 06:19:12.352254 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:12.353602 kubelet[2619]: I0121 06:19:12.352288 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:12.366097 kubelet[2619]: E0121 06:19:12.365172 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:12.366097 kubelet[2619]: I0121 06:19:12.365905 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 21 06:19:12.369125 kubelet[2619]: I0121 06:19:12.368940 2619 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 06:19:12.378970 
kubelet[2619]: E0121 06:19:12.378603 2619 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 21 06:19:18.444791 kubelet[2619]: I0121 06:19:18.444738 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:18.457350 kubelet[2619]: E0121 06:19:18.457225 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:19.207331 kubelet[2619]: E0121 06:19:19.207243 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:19.377084 systemd[1]: Reload requested from client PID 2907 ('systemctl') (unit session-8.scope)... Jan 21 06:19:19.377141 systemd[1]: Reloading... Jan 21 06:19:19.487773 zram_generator::config[2953]: No configuration found. 
Jan 21 06:19:19.648184 kubelet[2619]: I0121 06:19:19.648145 2619 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 21 06:19:19.658870 kubelet[2619]: I0121 06:19:19.658175 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.658135291 podStartE2EDuration="1.658135291s" podCreationTimestamp="2026-01-21 06:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:19:18.70240206 +0000 UTC m=+22.368919741" watchObservedRunningTime="2026-01-21 06:19:19.658135291 +0000 UTC m=+23.324652963" Jan 21 06:19:19.658870 kubelet[2619]: E0121 06:19:19.658414 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:19.821177 systemd[1]: Reloading finished in 443 ms. Jan 21 06:19:19.865551 kubelet[2619]: I0121 06:19:19.865284 2619 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 06:19:19.865801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:19:19.884547 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 06:19:19.885113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:19:19.885252 systemd[1]: kubelet.service: Consumed 5.782s CPU time, 130.6M memory peak. Jan 21 06:19:19.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:19:19.889245 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 21 06:19:19.889312 kernel: audit: type=1131 audit(1768976359.884:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:19:19.889781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 06:19:19.891000 audit: BPF prog-id=113 op=LOAD Jan 21 06:19:19.908771 kernel: audit: type=1334 audit(1768976359.891:418): prog-id=113 op=LOAD Jan 21 06:19:19.908857 kernel: audit: type=1334 audit(1768976359.891:419): prog-id=76 op=UNLOAD Jan 21 06:19:19.891000 audit: BPF prog-id=76 op=UNLOAD Jan 21 06:19:19.913121 kernel: audit: type=1334 audit(1768976359.891:420): prog-id=114 op=LOAD Jan 21 06:19:19.891000 audit: BPF prog-id=114 op=LOAD Jan 21 06:19:19.891000 audit: BPF prog-id=115 op=LOAD Jan 21 06:19:19.923097 kernel: audit: type=1334 audit(1768976359.891:421): prog-id=115 op=LOAD Jan 21 06:19:19.923139 kernel: audit: type=1334 audit(1768976359.891:422): prog-id=77 op=UNLOAD Jan 21 06:19:19.891000 audit: BPF prog-id=77 op=UNLOAD Jan 21 06:19:19.928168 kernel: audit: type=1334 audit(1768976359.891:423): prog-id=78 op=UNLOAD Jan 21 06:19:19.891000 audit: BPF prog-id=78 op=UNLOAD Jan 21 06:19:19.892000 audit: BPF prog-id=116 op=LOAD Jan 21 06:19:19.936154 kernel: audit: type=1334 audit(1768976359.892:424): prog-id=116 op=LOAD Jan 21 06:19:19.936234 kernel: audit: type=1334 audit(1768976359.892:425): prog-id=68 op=UNLOAD Jan 21 06:19:19.892000 audit: BPF prog-id=68 op=UNLOAD Jan 21 06:19:19.892000 audit: BPF prog-id=117 op=LOAD Jan 21 06:19:19.944314 kernel: audit: type=1334 audit(1768976359.892:426): prog-id=117 op=LOAD Jan 21 06:19:19.892000 audit: BPF prog-id=118 op=LOAD Jan 21 06:19:19.892000 audit: BPF prog-id=69 op=UNLOAD Jan 21 06:19:19.892000 audit: BPF prog-id=70 op=UNLOAD Jan 21 06:19:19.893000 audit: BPF 
prog-id=119 op=LOAD Jan 21 06:19:19.893000 audit: BPF prog-id=120 op=LOAD Jan 21 06:19:19.893000 audit: BPF prog-id=66 op=UNLOAD Jan 21 06:19:19.894000 audit: BPF prog-id=67 op=UNLOAD Jan 21 06:19:19.896000 audit: BPF prog-id=121 op=LOAD Jan 21 06:19:19.896000 audit: BPF prog-id=79 op=UNLOAD Jan 21 06:19:19.896000 audit: BPF prog-id=122 op=LOAD Jan 21 06:19:19.896000 audit: BPF prog-id=123 op=LOAD Jan 21 06:19:19.896000 audit: BPF prog-id=80 op=UNLOAD Jan 21 06:19:19.896000 audit: BPF prog-id=81 op=UNLOAD Jan 21 06:19:19.899000 audit: BPF prog-id=124 op=LOAD Jan 21 06:19:19.899000 audit: BPF prog-id=63 op=UNLOAD Jan 21 06:19:19.899000 audit: BPF prog-id=125 op=LOAD Jan 21 06:19:19.899000 audit: BPF prog-id=126 op=LOAD Jan 21 06:19:19.899000 audit: BPF prog-id=64 op=UNLOAD Jan 21 06:19:19.899000 audit: BPF prog-id=65 op=UNLOAD Jan 21 06:19:19.901000 audit: BPF prog-id=127 op=LOAD Jan 21 06:19:19.901000 audit: BPF prog-id=82 op=UNLOAD Jan 21 06:19:19.902000 audit: BPF prog-id=128 op=LOAD Jan 21 06:19:19.902000 audit: BPF prog-id=75 op=UNLOAD Jan 21 06:19:19.903000 audit: BPF prog-id=129 op=LOAD Jan 21 06:19:19.903000 audit: BPF prog-id=71 op=UNLOAD Jan 21 06:19:19.903000 audit: BPF prog-id=130 op=LOAD Jan 21 06:19:19.903000 audit: BPF prog-id=131 op=LOAD Jan 21 06:19:19.903000 audit: BPF prog-id=72 op=UNLOAD Jan 21 06:19:19.904000 audit: BPF prog-id=73 op=UNLOAD Jan 21 06:19:19.948000 audit: BPF prog-id=132 op=LOAD Jan 21 06:19:19.948000 audit: BPF prog-id=74 op=UNLOAD Jan 21 06:19:20.236163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 06:19:20.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:19:20.260275 (kubelet)[2998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 06:19:20.376211 kubelet[2998]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:19:20.376211 kubelet[2998]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 06:19:20.376211 kubelet[2998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:19:20.376211 kubelet[2998]: I0121 06:19:20.375913 2998 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 06:19:20.387222 kubelet[2998]: I0121 06:19:20.387117 2998 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 06:19:20.387222 kubelet[2998]: I0121 06:19:20.387183 2998 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 06:19:20.387407 kubelet[2998]: I0121 06:19:20.387386 2998 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 06:19:20.389034 kubelet[2998]: I0121 06:19:20.388950 2998 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 06:19:20.394841 kubelet[2998]: I0121 06:19:20.394431 2998 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 06:19:20.404327 kubelet[2998]: I0121 06:19:20.404241 2998 server.go:1446] "Using cgroup driver setting received from the CRI 
runtime" cgroupDriver="systemd" Jan 21 06:19:20.412524 kubelet[2998]: I0121 06:19:20.412349 2998 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 21 06:19:20.412941 kubelet[2998]: I0121 06:19:20.412846 2998 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 06:19:20.413153 kubelet[2998]: I0121 06:19:20.412927 2998 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"C
groupVersion":2} Jan 21 06:19:20.413305 kubelet[2998]: I0121 06:19:20.413161 2998 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 06:19:20.413305 kubelet[2998]: I0121 06:19:20.413177 2998 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 06:19:20.413305 kubelet[2998]: I0121 06:19:20.413231 2998 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:19:20.413814 kubelet[2998]: I0121 06:19:20.413523 2998 kubelet.go:480] "Attempting to sync node with API server" Jan 21 06:19:20.413814 kubelet[2998]: I0121 06:19:20.413573 2998 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 06:19:20.413814 kubelet[2998]: I0121 06:19:20.413604 2998 kubelet.go:386] "Adding apiserver pod source" Jan 21 06:19:20.413814 kubelet[2998]: I0121 06:19:20.413717 2998 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 06:19:20.414972 kubelet[2998]: I0121 06:19:20.414887 2998 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 06:19:20.416586 kubelet[2998]: I0121 06:19:20.415871 2998 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 06:19:20.426199 kubelet[2998]: I0121 06:19:20.426135 2998 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 06:19:20.426320 kubelet[2998]: I0121 06:19:20.426236 2998 server.go:1289] "Started kubelet" Jan 21 06:19:20.437775 kubelet[2998]: I0121 06:19:20.436999 2998 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 06:19:20.438197 kubelet[2998]: I0121 06:19:20.437910 2998 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 06:19:20.441302 kubelet[2998]: I0121 06:19:20.441225 2998 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 06:19:20.457269 kubelet[2998]: E0121 06:19:20.455393 2998 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 06:19:20.457269 kubelet[2998]: I0121 06:19:20.456872 2998 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 06:19:20.462300 kubelet[2998]: I0121 06:19:20.461818 2998 server.go:317] "Adding debug handlers to kubelet server" Jan 21 06:19:20.463241 kubelet[2998]: I0121 06:19:20.456910 2998 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 06:19:20.464247 kubelet[2998]: I0121 06:19:20.463719 2998 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 06:19:20.464247 kubelet[2998]: I0121 06:19:20.464129 2998 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 06:19:20.465242 kubelet[2998]: I0121 06:19:20.465044 2998 reconciler.go:26] "Reconciler: start to sync state" Jan 21 06:19:20.471096 kubelet[2998]: I0121 06:19:20.471009 2998 factory.go:223] Registration of the systemd container factory successfully Jan 21 06:19:20.471197 kubelet[2998]: I0121 06:19:20.471128 2998 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 06:19:20.474922 kubelet[2998]: I0121 06:19:20.474892 2998 factory.go:223] Registration of the containerd container factory successfully Jan 21 06:19:20.489536 kubelet[2998]: I0121 06:19:20.489199 2998 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 06:19:20.501939 kubelet[2998]: I0121 06:19:20.500723 2998 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 21 06:19:20.501939 kubelet[2998]: I0121 06:19:20.500851 2998 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 06:19:20.501939 kubelet[2998]: I0121 06:19:20.500879 2998 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 21 06:19:20.501939 kubelet[2998]: I0121 06:19:20.500890 2998 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 06:19:20.501939 kubelet[2998]: E0121 06:19:20.500995 2998 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 06:19:20.571263 kubelet[2998]: I0121 06:19:20.571139 2998 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 06:19:20.571263 kubelet[2998]: I0121 06:19:20.571206 2998 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 06:19:20.571263 kubelet[2998]: I0121 06:19:20.571231 2998 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:19:20.571420 kubelet[2998]: I0121 06:19:20.571401 2998 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 06:19:20.571546 kubelet[2998]: I0121 06:19:20.571413 2998 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 06:19:20.571546 kubelet[2998]: I0121 06:19:20.571511 2998 policy_none.go:49] "None policy: Start" Jan 21 06:19:20.571546 kubelet[2998]: I0121 06:19:20.571526 2998 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 06:19:20.571546 kubelet[2998]: I0121 06:19:20.571540 2998 state_mem.go:35] "Initializing new in-memory state store" Jan 21 06:19:20.572182 kubelet[2998]: I0121 06:19:20.572070 2998 state_mem.go:75] "Updated machine memory state" Jan 21 06:19:20.603200 kubelet[2998]: E0121 06:19:20.602189 2998 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 06:19:20.603819 kubelet[2998]: E0121 06:19:20.603729 
2998 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 06:19:20.604035 kubelet[2998]: I0121 06:19:20.603984 2998 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 06:19:20.604035 kubelet[2998]: I0121 06:19:20.603999 2998 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 06:19:20.608365 kubelet[2998]: I0121 06:19:20.607277 2998 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 06:19:20.609215 kubelet[2998]: E0121 06:19:20.608884 2998 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 21 06:19:20.728291 kubelet[2998]: I0121 06:19:20.728252 2998 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 21 06:19:20.751032 kubelet[2998]: I0121 06:19:20.749897 2998 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 21 06:19:20.751032 kubelet[2998]: I0121 06:19:20.750024 2998 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 21 06:19:20.804164 kubelet[2998]: I0121 06:19:20.803932 2998 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 21 06:19:20.804554 kubelet[2998]: I0121 06:19:20.804421 2998 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:20.804611 kubelet[2998]: I0121 06:19:20.804591 2998 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:20.840112 kubelet[2998]: E0121 06:19:20.840025 2998 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:20.844148 kubelet[2998]: E0121 06:19:20.844043 2998 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 21 06:19:20.866894 kubelet[2998]: I0121 06:19:20.866761 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:20.866894 kubelet[2998]: I0121 06:19:20.866854 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:20.866894 kubelet[2998]: I0121 06:19:20.866899 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 21 06:19:20.867162 kubelet[2998]: I0121 06:19:20.866915 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5371dcb95d2851f3d2c6b2ebc450a662-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5371dcb95d2851f3d2c6b2ebc450a662\") " pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:20.867162 kubelet[2998]: I0121 06:19:20.866929 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5371dcb95d2851f3d2c6b2ebc450a662-k8s-certs\") 
pod \"kube-apiserver-localhost\" (UID: \"5371dcb95d2851f3d2c6b2ebc450a662\") " pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:20.867162 kubelet[2998]: I0121 06:19:20.866941 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5371dcb95d2851f3d2c6b2ebc450a662-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5371dcb95d2851f3d2c6b2ebc450a662\") " pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:20.867162 kubelet[2998]: I0121 06:19:20.866953 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:20.867162 kubelet[2998]: I0121 06:19:20.866967 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:20.867290 kubelet[2998]: I0121 06:19:20.866980 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 21 06:19:21.131492 kubelet[2998]: E0121 06:19:21.131012 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 
06:19:21.141218 kubelet[2998]: E0121 06:19:21.141147 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:21.145098 kubelet[2998]: E0121 06:19:21.144842 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:21.420018 kubelet[2998]: I0121 06:19:21.419817 2998 apiserver.go:52] "Watching apiserver" Jan 21 06:19:21.465227 kubelet[2998]: I0121 06:19:21.465051 2998 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 06:19:21.534306 kubelet[2998]: E0121 06:19:21.534211 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:21.534927 kubelet[2998]: I0121 06:19:21.534677 2998 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:21.535245 kubelet[2998]: E0121 06:19:21.535076 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:21.549693 kubelet[2998]: E0121 06:19:21.549086 2998 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 21 06:19:21.551314 kubelet[2998]: E0121 06:19:21.551193 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:21.559013 kubelet[2998]: I0121 06:19:21.558854 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=2.558840447 podStartE2EDuration="2.558840447s" podCreationTimestamp="2026-01-21 06:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:19:21.557916104 +0000 UTC m=+1.278822665" watchObservedRunningTime="2026-01-21 06:19:21.558840447 +0000 UTC m=+1.279747007" Jan 21 06:19:21.637719 kubelet[2998]: I0121 06:19:21.636286 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.636263577 podStartE2EDuration="1.636263577s" podCreationTimestamp="2026-01-21 06:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:19:21.60242267 +0000 UTC m=+1.323329241" watchObservedRunningTime="2026-01-21 06:19:21.636263577 +0000 UTC m=+1.357170137" Jan 21 06:19:22.537141 kubelet[2998]: E0121 06:19:22.537004 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:22.537988 kubelet[2998]: E0121 06:19:22.537904 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:22.788048 kubelet[2998]: E0121 06:19:22.787792 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:23.544057 kubelet[2998]: E0121 06:19:23.543884 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:26.444544 kubelet[2998]: I0121 06:19:26.444437 2998 kuberuntime_manager.go:1746] "Updating 
runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 06:19:26.445144 containerd[1588]: time="2026-01-21T06:19:26.444999511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 21 06:19:26.445568 kubelet[2998]: I0121 06:19:26.445256 2998 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 06:19:27.518879 systemd[1]: Created slice kubepods-besteffort-pod94e72237_1b6d_4bf7_afd5_1e92184e0202.slice - libcontainer container kubepods-besteffort-pod94e72237_1b6d_4bf7_afd5_1e92184e0202.slice. Jan 21 06:19:27.524096 kubelet[2998]: I0121 06:19:27.523793 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vsqm\" (UniqueName: \"kubernetes.io/projected/94e72237-1b6d-4bf7-afd5-1e92184e0202-kube-api-access-6vsqm\") pod \"kube-proxy-tm69l\" (UID: \"94e72237-1b6d-4bf7-afd5-1e92184e0202\") " pod="kube-system/kube-proxy-tm69l" Jan 21 06:19:27.524096 kubelet[2998]: I0121 06:19:27.523883 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94e72237-1b6d-4bf7-afd5-1e92184e0202-kube-proxy\") pod \"kube-proxy-tm69l\" (UID: \"94e72237-1b6d-4bf7-afd5-1e92184e0202\") " pod="kube-system/kube-proxy-tm69l" Jan 21 06:19:27.524096 kubelet[2998]: I0121 06:19:27.523915 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94e72237-1b6d-4bf7-afd5-1e92184e0202-xtables-lock\") pod \"kube-proxy-tm69l\" (UID: \"94e72237-1b6d-4bf7-afd5-1e92184e0202\") " pod="kube-system/kube-proxy-tm69l" Jan 21 06:19:27.524096 kubelet[2998]: I0121 06:19:27.523935 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/94e72237-1b6d-4bf7-afd5-1e92184e0202-lib-modules\") pod \"kube-proxy-tm69l\" (UID: \"94e72237-1b6d-4bf7-afd5-1e92184e0202\") " pod="kube-system/kube-proxy-tm69l" Jan 21 06:19:27.635734 systemd[1]: Created slice kubepods-besteffort-pod2de376a5_9b85_46c0_ad47_f816ae9a1613.slice - libcontainer container kubepods-besteffort-pod2de376a5_9b85_46c0_ad47_f816ae9a1613.slice. Jan 21 06:19:27.725217 kubelet[2998]: I0121 06:19:27.725103 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2de376a5-9b85-46c0-ad47-f816ae9a1613-var-lib-calico\") pod \"tigera-operator-7dcd859c48-f6x5n\" (UID: \"2de376a5-9b85-46c0-ad47-f816ae9a1613\") " pod="tigera-operator/tigera-operator-7dcd859c48-f6x5n" Jan 21 06:19:27.725217 kubelet[2998]: I0121 06:19:27.725167 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9j59\" (UniqueName: \"kubernetes.io/projected/2de376a5-9b85-46c0-ad47-f816ae9a1613-kube-api-access-x9j59\") pod \"tigera-operator-7dcd859c48-f6x5n\" (UID: \"2de376a5-9b85-46c0-ad47-f816ae9a1613\") " pod="tigera-operator/tigera-operator-7dcd859c48-f6x5n" Jan 21 06:19:27.829236 kubelet[2998]: E0121 06:19:27.829003 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:27.830505 containerd[1588]: time="2026-01-21T06:19:27.830201464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm69l,Uid:94e72237-1b6d-4bf7-afd5-1e92184e0202,Namespace:kube-system,Attempt:0,}" Jan 21 06:19:27.901604 containerd[1588]: time="2026-01-21T06:19:27.901497893Z" level=info msg="connecting to shim f915a785922004c83ff2db00931bfc6d73d8c7c30afb51b0cb9f4f7ff10a60b3" address="unix:///run/containerd/s/5e0b84e1c994462cfb89cd999e7b59482bf96871f38ab595e5a22297169581e7" 
namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:19:27.945067 containerd[1588]: time="2026-01-21T06:19:27.945010944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-f6x5n,Uid:2de376a5-9b85-46c0-ad47-f816ae9a1613,Namespace:tigera-operator,Attempt:0,}" Jan 21 06:19:27.981897 systemd[1]: Started cri-containerd-f915a785922004c83ff2db00931bfc6d73d8c7c30afb51b0cb9f4f7ff10a60b3.scope - libcontainer container f915a785922004c83ff2db00931bfc6d73d8c7c30afb51b0cb9f4f7ff10a60b3. Jan 21 06:19:27.984318 containerd[1588]: time="2026-01-21T06:19:27.984288972Z" level=info msg="connecting to shim a17f7bd114780b5b7fbc90cb2448a2b0d11c1db7fea47a66bc8ec7b4d3bf4259" address="unix:///run/containerd/s/52eb7aeca9a073f6461f44d2e288442b382fe0a3bd034e8986681a588ce75243" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:19:28.007000 audit: BPF prog-id=133 op=LOAD Jan 21 06:19:28.012230 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 21 06:19:28.012324 kernel: audit: type=1334 audit(1768976368.007:459): prog-id=133 op=LOAD Jan 21 06:19:28.008000 audit: BPF prog-id=134 op=LOAD Jan 21 06:19:28.022842 kernel: audit: type=1334 audit(1768976368.008:460): prog-id=134 op=LOAD Jan 21 06:19:28.022892 kernel: audit: type=1300 audit(1768976368.008:460): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: BPF prog-id=134 op=UNLOAD Jan 21 06:19:28.064287 kernel: audit: type=1327 audit(1768976368.008:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.064433 kernel: audit: type=1334 audit(1768976368.008:461): prog-id=134 op=UNLOAD Jan 21 06:19:28.064467 kernel: audit: type=1300 audit(1768976368.008:461): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.080489 kernel: audit: type=1327 audit(1768976368.008:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: BPF prog-id=135 op=LOAD Jan 21 06:19:28.097865 kernel: audit: type=1334 audit(1768976368.008:462): prog-id=135 op=LOAD Jan 21 06:19:28.098142 kernel: audit: type=1300 audit(1768976368.008:462): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.114203 systemd[1]: Started cri-containerd-a17f7bd114780b5b7fbc90cb2448a2b0d11c1db7fea47a66bc8ec7b4d3bf4259.scope - libcontainer container a17f7bd114780b5b7fbc90cb2448a2b0d11c1db7fea47a66bc8ec7b4d3bf4259. 
Jan 21 06:19:28.126554 containerd[1588]: time="2026-01-21T06:19:28.126312292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tm69l,Uid:94e72237-1b6d-4bf7-afd5-1e92184e0202,Namespace:kube-system,Attempt:0,} returns sandbox id \"f915a785922004c83ff2db00931bfc6d73d8c7c30afb51b0cb9f4f7ff10a60b3\"" Jan 21 06:19:28.127324 kernel: audit: type=1327 audit(1768976368.008:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: BPF prog-id=136 op=LOAD Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: BPF prog-id=136 op=UNLOAD Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: BPF prog-id=135 op=UNLOAD Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.008000 audit: BPF prog-id=137 op=LOAD Jan 21 06:19:28.008000 audit[3077]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3066 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639313561373835393232303034633833666632646230303933316266 Jan 21 06:19:28.129167 kubelet[2998]: E0121 06:19:28.128412 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:28.135783 containerd[1588]: time="2026-01-21T06:19:28.135458098Z" level=info msg="CreateContainer within sandbox 
\"f915a785922004c83ff2db00931bfc6d73d8c7c30afb51b0cb9f4f7ff10a60b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 06:19:28.150000 audit: BPF prog-id=138 op=LOAD Jan 21 06:19:28.152000 audit: BPF prog-id=139 op=LOAD Jan 21 06:19:28.152000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.152000 audit: BPF prog-id=139 op=UNLOAD Jan 21 06:19:28.152000 audit[3117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.152000 audit: BPF prog-id=140 op=LOAD Jan 21 06:19:28.152000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.152000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.154000 audit: BPF prog-id=141 op=LOAD Jan 21 06:19:28.154000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.154000 audit: BPF prog-id=141 op=UNLOAD Jan 21 06:19:28.154000 audit[3117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.154000 audit: BPF prog-id=140 op=UNLOAD Jan 21 06:19:28.154000 audit[3117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:19:28.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.154000 audit: BPF prog-id=142 op=LOAD Jan 21 06:19:28.154000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3099 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131376637626431313437383062356237666263393063623234343861 Jan 21 06:19:28.159725 containerd[1588]: time="2026-01-21T06:19:28.158290490Z" level=info msg="Container 927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:28.170922 containerd[1588]: time="2026-01-21T06:19:28.170890435Z" level=info msg="CreateContainer within sandbox \"f915a785922004c83ff2db00931bfc6d73d8c7c30afb51b0cb9f4f7ff10a60b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2\"" Jan 21 06:19:28.172798 containerd[1588]: time="2026-01-21T06:19:28.172604459Z" level=info msg="StartContainer for \"927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2\"" Jan 21 06:19:28.179737 containerd[1588]: time="2026-01-21T06:19:28.179593021Z" level=info msg="connecting to shim 927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2" address="unix:///run/containerd/s/5e0b84e1c994462cfb89cd999e7b59482bf96871f38ab595e5a22297169581e7" 
protocol=ttrpc version=3 Jan 21 06:19:28.234790 systemd[1]: Started cri-containerd-927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2.scope - libcontainer container 927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2. Jan 21 06:19:28.241041 containerd[1588]: time="2026-01-21T06:19:28.240992474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-f6x5n,Uid:2de376a5-9b85-46c0-ad47-f816ae9a1613,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a17f7bd114780b5b7fbc90cb2448a2b0d11c1db7fea47a66bc8ec7b4d3bf4259\"" Jan 21 06:19:28.247098 containerd[1588]: time="2026-01-21T06:19:28.247045418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 21 06:19:28.341000 audit: BPF prog-id=143 op=LOAD Jan 21 06:19:28.341000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3066 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373833356361356434386662343931663663363631383539393737 Jan 21 06:19:28.341000 audit: BPF prog-id=144 op=LOAD Jan 21 06:19:28.341000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3066 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.341000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373833356361356434386662343931663663363631383539393737 Jan 21 06:19:28.341000 audit: BPF prog-id=144 op=UNLOAD Jan 21 06:19:28.341000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3066 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373833356361356434386662343931663663363631383539393737 Jan 21 06:19:28.341000 audit: BPF prog-id=143 op=UNLOAD Jan 21 06:19:28.341000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3066 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373833356361356434386662343931663663363631383539393737 Jan 21 06:19:28.341000 audit: BPF prog-id=145 op=LOAD Jan 21 06:19:28.341000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3066 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:19:28.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373833356361356434386662343931663663363631383539393737 Jan 21 06:19:28.392296 containerd[1588]: time="2026-01-21T06:19:28.392245519Z" level=info msg="StartContainer for \"927835ca5d48fb491f6c661859977c29ba885cdcfc0f6f3dbffb63b59cde11f2\" returns successfully" Jan 21 06:19:28.416731 kubelet[2998]: E0121 06:19:28.416221 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:28.579202 kubelet[2998]: E0121 06:19:28.578795 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:28.579202 kubelet[2998]: E0121 06:19:28.579053 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:28.646915 kubelet[2998]: I0121 06:19:28.646552 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tm69l" podStartSLOduration=1.646532046 podStartE2EDuration="1.646532046s" podCreationTimestamp="2026-01-21 06:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:19:28.618780092 +0000 UTC m=+8.339686673" watchObservedRunningTime="2026-01-21 06:19:28.646532046 +0000 UTC m=+8.367438606" Jan 21 06:19:28.838000 audit[3216]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.838000 audit[3214]: NETFILTER_CFG table=mangle:55 
family=10 entries=1 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:28.838000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb653b7b0 a2=0 a3=7ffeb653b79c items=0 ppid=3160 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 06:19:28.838000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0699c9b0 a2=0 a3=7ffd0699c99c items=0 ppid=3160 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 06:19:28.852000 audit[3219]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:28.852000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3ded26b0 a2=0 a3=7ffc3ded269c items=0 ppid=3160 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 06:19:28.859000 audit[3220]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.859000 
audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe3df2630 a2=0 a3=7fffe3df261c items=0 ppid=3160 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 06:19:28.862000 audit[3221]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:28.862000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde3b78340 a2=0 a3=7ffde3b7832c items=0 ppid=3160 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 06:19:28.874000 audit[3223]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.874000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc481e6a90 a2=0 a3=7ffc481e6a7c items=0 ppid=3160 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.874000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 06:19:28.935000 audit[3224]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 21 06:19:28.935000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd7d185690 a2=0 a3=7ffd7d18567c items=0 ppid=3160 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 06:19:28.945000 audit[3226]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.945000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe27318a80 a2=0 a3=7ffe27318a6c items=0 ppid=3160 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 06:19:28.967000 audit[3229]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.967000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbdb57310 a2=0 a3=7fffbdb572fc items=0 ppid=3160 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.967000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 06:19:28.974000 audit[3230]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.974000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdabf45560 a2=0 a3=7ffdabf4554c items=0 ppid=3160 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 06:19:28.985000 audit[3232]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.985000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb2ad7d60 a2=0 a3=7fffb2ad7d4c items=0 ppid=3160 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.985000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 06:19:28.990000 audit[3233]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:28.990000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffdbec647a0 a2=0 a3=7ffdbec6478c items=0 ppid=3160 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:28.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 06:19:29.006000 audit[3235]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.006000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffffd850820 a2=0 a3=7ffffd85080c items=0 ppid=3160 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 06:19:29.021000 audit[3238]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.021000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc285bbd00 a2=0 a3=7ffc285bbcec items=0 ppid=3160 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.021000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 21 06:19:29.024000 audit[3239]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.024000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe52aa9a10 a2=0 a3=7ffe52aa99fc items=0 ppid=3160 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 06:19:29.034000 audit[3241]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.034000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf8a1b1a0 a2=0 a3=7ffcf8a1b18c items=0 ppid=3160 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 06:19:29.038000 audit[3242]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.038000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7fff5862b010 a2=0 a3=7fff5862affc items=0 ppid=3160 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 06:19:29.046000 audit[3244]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.046000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc53654f70 a2=0 a3=7ffc53654f5c items=0 ppid=3160 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.046000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 06:19:29.059000 audit[3247]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.059000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec9cd6550 a2=0 a3=7ffec9cd653c items=0 ppid=3160 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.059000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 06:19:29.078000 audit[3250]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.078000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd27891cd0 a2=0 a3=7ffd27891cbc items=0 ppid=3160 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 06:19:29.082000 audit[3251]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.082000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe78fd4860 a2=0 a3=7ffe78fd484c items=0 ppid=3160 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 06:19:29.092000 audit[3253]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.092000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7fffe0d21330 a2=0 a3=7fffe0d2131c items=0 ppid=3160 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 06:19:29.106000 audit[3256]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.106000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd94ff5cf0 a2=0 a3=7ffd94ff5cdc items=0 ppid=3160 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 06:19:29.110000 audit[3257]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.110000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc48bcaf30 a2=0 a3=7ffc48bcaf1c items=0 ppid=3160 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 
06:19:29.121000 audit[3259]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 06:19:29.121000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd1237d5c0 a2=0 a3=7ffd1237d5ac items=0 ppid=3160 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 06:19:29.189000 audit[3265]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:29.189000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe663a3f60 a2=0 a3=7ffe663a3f4c items=0 ppid=3160 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.189000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:29.218000 audit[3265]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:29.218000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe663a3f60 a2=0 a3=7ffe663a3f4c items=0 ppid=3160 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:29.230000 audit[3270]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.230000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe0301c990 a2=0 a3=7ffe0301c97c items=0 ppid=3160 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 06:19:29.247000 audit[3272]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.247000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff1ed36680 a2=0 a3=7fff1ed3666c items=0 ppid=3160 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.247000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 21 06:19:29.272000 audit[3275]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.272000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffdb37bc150 a2=0 a3=7ffdb37bc13c items=0 ppid=3160 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 21 06:19:29.282000 audit[3276]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.282000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe804ca760 a2=0 a3=7ffe804ca74c items=0 ppid=3160 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 06:19:29.304000 audit[3278]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.304000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa5fcc400 a2=0 a3=7fffa5fcc3ec items=0 ppid=3160 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.304000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 06:19:29.313000 audit[3279]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.313000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda90b4430 a2=0 a3=7ffda90b441c items=0 ppid=3160 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 06:19:29.325000 audit[3281]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.325000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd530b08c0 a2=0 a3=7ffd530b08ac items=0 ppid=3160 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 21 06:19:29.341000 audit[3284]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.341000 audit[3284]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7fffcd9ae620 a2=0 a3=7fffcd9ae60c items=0 ppid=3160 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 06:19:29.346000 audit[3285]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.346000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd0421a90 a2=0 a3=7ffdd0421a7c items=0 ppid=3160 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 06:19:29.356000 audit[3287]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.356000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdcb4c0730 a2=0 a3=7ffdcb4c071c items=0 ppid=3160 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.356000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 06:19:29.362000 audit[3288]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.362000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd36c42890 a2=0 a3=7ffd36c4287c items=0 ppid=3160 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 06:19:29.376000 audit[3290]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.376000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7496f420 a2=0 a3=7ffe7496f40c items=0 ppid=3160 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 06:19:29.396000 audit[3293]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.396000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffd583821a0 a2=0 a3=7ffd5838218c items=0 ppid=3160 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 06:19:29.413000 audit[3296]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.413000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeed04fd40 a2=0 a3=7ffeed04fd2c items=0 ppid=3160 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 21 06:19:29.421000 audit[3297]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.421000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6de1ef90 a2=0 a3=7fff6de1ef7c items=0 ppid=3160 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.421000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 06:19:29.431000 audit[3299]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.431000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc56310f90 a2=0 a3=7ffc56310f7c items=0 ppid=3160 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 06:19:29.448000 audit[3302]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.448000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9cbffd70 a2=0 a3=7fff9cbffd5c items=0 ppid=3160 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 06:19:29.453000 audit[3303]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.453000 audit[3303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef85c5700 a2=0 a3=7ffef85c56ec items=0 ppid=3160 
pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 06:19:29.462000 audit[3305]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.462000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd9bdb4690 a2=0 a3=7ffd9bdb467c items=0 ppid=3160 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 06:19:29.464235 kubelet[2998]: E0121 06:19:29.463894 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:29.470000 audit[3306]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.470000 audit[3306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdefe3ac00 a2=0 a3=7ffdefe3abec items=0 ppid=3160 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.470000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 06:19:29.481000 audit[3308]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.481000 audit[3308]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdddc18270 a2=0 a3=7ffdddc1825c items=0 ppid=3160 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 06:19:29.497000 audit[3311]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 06:19:29.497000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc47ae8220 a2=0 a3=7ffc47ae820c items=0 ppid=3160 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 06:19:29.509000 audit[3313]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 06:19:29.509000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcf6f720e0 a2=0 a3=7ffcf6f720cc items=0 ppid=3160 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.509000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:29.510000 audit[3313]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 06:19:29.510000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcf6f720e0 a2=0 a3=7ffcf6f720cc items=0 ppid=3160 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:29.510000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:29.586211 kubelet[2998]: E0121 06:19:29.585840 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:29.586934 kubelet[2998]: E0121 06:19:29.586466 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:29.989601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720377303.mount: Deactivated successfully. 
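The `proctitle=` values in the audit records above are the invoked command lines, hex-encoded with NUL bytes separating arguments. A minimal sketch of decoding one back to readable form, using the `iptables-restore` proctitle value copied verbatim from the entries above:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex string into a readable command line.

    Audit encodes the process title as raw bytes in hex; arguments are
    separated by NUL (0x00) bytes, which we rejoin with spaces.
    """
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("ascii", errors="replace")
                    for part in raw.split(b"\x00"))


# Value taken verbatim from the iptables-restore audit entry in this log.
sample = ("69707461626C65732D726573746F7265002D770035002D5700"
          "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

Decoded this way, the long run of records above is kube-proxy's setup pass: creating the `KUBE-SERVICES`, `KUBE-NODEPORTS`, `KUBE-FORWARD`, and `KUBE-POSTROUTING` chains and their jump rules in the `filter` and `nat` tables, first via `iptables` (family=2, IPv4) and then `ip6tables` (family=10, IPv6).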
Jan 21 06:19:35.186440 containerd[1588]: time="2026-01-21T06:19:35.186333669Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:35.188540 containerd[1588]: time="2026-01-21T06:19:35.188485959Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 21 06:19:35.193519 containerd[1588]: time="2026-01-21T06:19:35.193441561Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:35.198381 containerd[1588]: time="2026-01-21T06:19:35.198208468Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:35.199594 containerd[1588]: time="2026-01-21T06:19:35.199454805Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.951891151s" Jan 21 06:19:35.199594 containerd[1588]: time="2026-01-21T06:19:35.199544040Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 21 06:19:35.215144 containerd[1588]: time="2026-01-21T06:19:35.215036435Z" level=info msg="CreateContainer within sandbox \"a17f7bd114780b5b7fbc90cb2448a2b0d11c1db7fea47a66bc8ec7b4d3bf4259\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 21 06:19:35.237773 containerd[1588]: time="2026-01-21T06:19:35.237458033Z" level=info msg="Container 
e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:35.262043 containerd[1588]: time="2026-01-21T06:19:35.261915165Z" level=info msg="CreateContainer within sandbox \"a17f7bd114780b5b7fbc90cb2448a2b0d11c1db7fea47a66bc8ec7b4d3bf4259\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd\"" Jan 21 06:19:35.263774 containerd[1588]: time="2026-01-21T06:19:35.263445067Z" level=info msg="StartContainer for \"e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd\"" Jan 21 06:19:35.266934 containerd[1588]: time="2026-01-21T06:19:35.265970071Z" level=info msg="connecting to shim e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd" address="unix:///run/containerd/s/52eb7aeca9a073f6461f44d2e288442b382fe0a3bd034e8986681a588ce75243" protocol=ttrpc version=3 Jan 21 06:19:35.347232 systemd[1]: Started cri-containerd-e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd.scope - libcontainer container e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd. 
Jan 21 06:19:35.383000 audit: BPF prog-id=146 op=LOAD Jan 21 06:19:35.389798 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 21 06:19:35.390567 kernel: audit: type=1334 audit(1768976375.383:531): prog-id=146 op=LOAD Jan 21 06:19:35.384000 audit: BPF prog-id=147 op=LOAD Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.426342 kernel: audit: type=1334 audit(1768976375.384:532): prog-id=147 op=LOAD Jan 21 06:19:35.426485 kernel: audit: type=1300 audit(1768976375.384:532): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.426524 kernel: audit: type=1327 audit(1768976375.384:532): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.445023 kernel: audit: type=1334 audit(1768976375.384:533): prog-id=147 op=UNLOAD Jan 21 06:19:35.384000 audit: BPF prog-id=147 op=UNLOAD Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3322 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.471829 kernel: audit: type=1300 audit(1768976375.384:533): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.471961 kernel: audit: type=1327 audit(1768976375.384:533): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: BPF prog-id=148 op=LOAD Jan 21 06:19:35.505961 kernel: audit: type=1334 audit(1768976375.384:534): prog-id=148 op=LOAD Jan 21 06:19:35.506050 kernel: audit: type=1300 audit(1768976375.384:534): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:19:35.522004 containerd[1588]: time="2026-01-21T06:19:35.521876467Z" level=info msg="StartContainer for \"e0e0e3baace6c427ea73963968fd41e7d3b61a65bf12aa5ec0aac63aca0d97bd\" returns successfully" Jan 21 06:19:35.531094 kernel: audit: type=1327 audit(1768976375.384:534): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: BPF prog-id=149 op=LOAD Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: BPF prog-id=149 op=UNLOAD Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.384000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: BPF prog-id=148 op=UNLOAD Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:35.384000 audit: BPF prog-id=150 op=LOAD Jan 21 06:19:35.384000 audit[3322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3099 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:35.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530653065336261616365366334323765613733393633393638666434 Jan 21 06:19:43.013890 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 21 06:19:43.014246 kernel: audit: type=1106 audit(1768976382.991:539): pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 06:19:42.991000 audit[1825]: USER_END pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 06:19:42.992611 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 21 06:19:42.991000 audit[1825]: CRED_DISP pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 06:19:43.029868 kernel: audit: type=1104 audit(1768976382.991:540): pid=1825 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 06:19:43.029951 sshd[1824]: Connection closed by 10.0.0.1 port 50776 Jan 21 06:19:43.030972 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Jan 21 06:19:43.035000 audit[1820]: USER_END pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:19:43.035000 audit[1820]: CRED_DISP pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:19:43.079952 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:50776.service: Deactivated successfully. 
Jan 21 06:19:43.091578 kernel: audit: type=1106 audit(1768976383.035:541): pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:19:43.091825 kernel: audit: type=1104 audit(1768976383.035:542): pid=1820 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:19:43.093076 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 06:19:43.094045 systemd[1]: session-8.scope: Consumed 15.849s CPU time, 221.2M memory peak. Jan 21 06:19:43.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:50776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:19:43.115116 kernel: audit: type=1131 audit(1768976383.085:543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:50776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:19:43.114427 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Jan 21 06:19:43.117599 systemd-logind[1571]: Removed session 8. 
Jan 21 06:19:43.791000 audit[3414]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:43.809049 kernel: audit: type=1325 audit(1768976383.791:544): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:43.791000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa01b8c70 a2=0 a3=7fffa01b8c5c items=0 ppid=3160 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:43.849868 kernel: audit: type=1300 audit(1768976383.791:544): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffa01b8c70 a2=0 a3=7fffa01b8c5c items=0 ppid=3160 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:43.850010 kernel: audit: type=1327 audit(1768976383.791:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:43.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:43.850000 audit[3414]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:43.877809 kernel: audit: type=1325 audit(1768976383.850:545): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:43.850000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa01b8c70 a2=0 a3=0 items=0 ppid=3160 pid=3414 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:43.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:43.909828 kernel: audit: type=1300 audit(1768976383.850:545): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa01b8c70 a2=0 a3=0 items=0 ppid=3160 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:43.916000 audit[3416]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:43.916000 audit[3416]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc2d035f60 a2=0 a3=7ffc2d035f4c items=0 ppid=3160 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:43.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:43.921000 audit[3416]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:43.921000 audit[3416]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2d035f60 a2=0 a3=0 items=0 ppid=3160 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:43.921000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:46.235000 audit[3418]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3418 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:46.235000 audit[3418]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcc29c05c0 a2=0 a3=7ffcc29c05ac items=0 ppid=3160 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:46.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:46.252000 audit[3418]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3418 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:46.252000 audit[3418]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc29c05c0 a2=0 a3=0 items=0 ppid=3160 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:46.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:46.349000 audit[3420]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:46.349000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffea7944550 a2=0 a3=7ffea794453c items=0 ppid=3160 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:46.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:46.356000 audit[3420]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:46.356000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea7944550 a2=0 a3=0 items=0 ppid=3160 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:46.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:47.379000 audit[3422]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:47.379000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe332c86a0 a2=0 a3=7ffe332c868c items=0 ppid=3160 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:47.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:47.390000 audit[3422]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:47.390000 audit[3422]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe332c86a0 a2=0 a3=0 items=0 ppid=3160 pid=3422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:47.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:48.629000 audit[3424]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:48.644782 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 21 06:19:48.644890 kernel: audit: type=1325 audit(1768976388.629:554): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:48.629000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe58972950 a2=0 a3=7ffe5897293c items=0 ppid=3160 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:48.678798 kernel: audit: type=1300 audit(1768976388.629:554): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe58972950 a2=0 a3=7ffe5897293c items=0 ppid=3160 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:48.679030 kernel: audit: type=1327 audit(1768976388.629:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:48.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:48.690000 audit[3424]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 
06:19:48.690000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe58972950 a2=0 a3=0 items=0 ppid=3160 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:48.706901 kubelet[2998]: I0121 06:19:48.705872 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-f6x5n" podStartSLOduration=14.748805346 podStartE2EDuration="21.705853619s" podCreationTimestamp="2026-01-21 06:19:27 +0000 UTC" firstStartedPulling="2026-01-21 06:19:28.246160565 +0000 UTC m=+7.967067127" lastFinishedPulling="2026-01-21 06:19:35.203208838 +0000 UTC m=+14.924115400" observedRunningTime="2026-01-21 06:19:35.668412092 +0000 UTC m=+15.389318703" watchObservedRunningTime="2026-01-21 06:19:48.705853619 +0000 UTC m=+28.426760230" Jan 21 06:19:48.736363 systemd[1]: Created slice kubepods-besteffort-podbb894753_8286_4b3c_a52c_df19241c26bf.slice - libcontainer container kubepods-besteffort-podbb894753_8286_4b3c_a52c_df19241c26bf.slice. 
Jan 21 06:19:48.742761 kernel: audit: type=1325 audit(1768976388.690:555): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:48.742833 kernel: audit: type=1300 audit(1768976388.690:555): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe58972950 a2=0 a3=0 items=0 ppid=3160 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:48.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:48.760981 kernel: audit: type=1327 audit(1768976388.690:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:48.768210 kubelet[2998]: I0121 06:19:48.767981 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chfb\" (UniqueName: \"kubernetes.io/projected/bb894753-8286-4b3c-a52c-df19241c26bf-kube-api-access-7chfb\") pod \"calico-typha-6b9c6c4f48-kzbws\" (UID: \"bb894753-8286-4b3c-a52c-df19241c26bf\") " pod="calico-system/calico-typha-6b9c6c4f48-kzbws" Jan 21 06:19:48.768210 kubelet[2998]: I0121 06:19:48.768105 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bb894753-8286-4b3c-a52c-df19241c26bf-typha-certs\") pod \"calico-typha-6b9c6c4f48-kzbws\" (UID: \"bb894753-8286-4b3c-a52c-df19241c26bf\") " pod="calico-system/calico-typha-6b9c6c4f48-kzbws" Jan 21 06:19:48.768511 kubelet[2998]: I0121 06:19:48.768212 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bb894753-8286-4b3c-a52c-df19241c26bf-tigera-ca-bundle\") pod \"calico-typha-6b9c6c4f48-kzbws\" (UID: \"bb894753-8286-4b3c-a52c-df19241c26bf\") " pod="calico-system/calico-typha-6b9c6c4f48-kzbws" Jan 21 06:19:48.965453 systemd[1]: Created slice kubepods-besteffort-pod402849c8_3365_4889_8bee_b93131b414d6.slice - libcontainer container kubepods-besteffort-pod402849c8_3365_4889_8bee_b93131b414d6.slice. Jan 21 06:19:48.971452 kubelet[2998]: I0121 06:19:48.969931 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-flexvol-driver-host\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.971452 kubelet[2998]: I0121 06:19:48.969978 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-policysync\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.971452 kubelet[2998]: I0121 06:19:48.970004 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-var-lib-calico\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.971452 kubelet[2998]: I0121 06:19:48.970028 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-cni-log-dir\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.971452 
kubelet[2998]: I0121 06:19:48.970050 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g6kq\" (UniqueName: \"kubernetes.io/projected/402849c8-3365-4889-8bee-b93131b414d6-kube-api-access-8g6kq\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.971857 kubelet[2998]: I0121 06:19:48.970078 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-cni-bin-dir\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.971857 kubelet[2998]: I0121 06:19:48.970098 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-cni-net-dir\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.974007 kubelet[2998]: I0121 06:19:48.973901 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/402849c8-3365-4889-8bee-b93131b414d6-node-certs\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.974007 kubelet[2998]: I0121 06:19:48.973955 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402849c8-3365-4889-8bee-b93131b414d6-tigera-ca-bundle\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.974007 kubelet[2998]: I0121 06:19:48.973983 2998 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-var-run-calico\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.974007 kubelet[2998]: I0121 06:19:48.974005 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-xtables-lock\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:48.974280 kubelet[2998]: I0121 06:19:48.974029 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402849c8-3365-4889-8bee-b93131b414d6-lib-modules\") pod \"calico-node-bg4vn\" (UID: \"402849c8-3365-4889-8bee-b93131b414d6\") " pod="calico-system/calico-node-bg4vn" Jan 21 06:19:49.042504 kubelet[2998]: E0121 06:19:49.042284 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:49.044577 containerd[1588]: time="2026-01-21T06:19:49.043810824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9c6c4f48-kzbws,Uid:bb894753-8286-4b3c-a52c-df19241c26bf,Namespace:calico-system,Attempt:0,}" Jan 21 06:19:49.063272 kubelet[2998]: E0121 06:19:49.063026 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:19:49.088097 kubelet[2998]: E0121 06:19:49.087898 2998 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.088097 kubelet[2998]: W0121 06:19:49.087986 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.088097 kubelet[2998]: E0121 06:19:49.088018 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.116803 kubelet[2998]: E0121 06:19:49.116603 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.116803 kubelet[2998]: W0121 06:19:49.116758 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.116803 kubelet[2998]: E0121 06:19:49.116783 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.129004 kubelet[2998]: E0121 06:19:49.128957 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.129004 kubelet[2998]: W0121 06:19:49.128978 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.129004 kubelet[2998]: E0121 06:19:49.128997 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.130983 kubelet[2998]: E0121 06:19:49.130910 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.130983 kubelet[2998]: W0121 06:19:49.130934 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.130983 kubelet[2998]: E0121 06:19:49.130956 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.132422 kubelet[2998]: E0121 06:19:49.132233 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.132422 kubelet[2998]: W0121 06:19:49.132294 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.132422 kubelet[2998]: E0121 06:19:49.132308 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.134047 kubelet[2998]: E0121 06:19:49.133022 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.134047 kubelet[2998]: W0121 06:19:49.133045 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.134047 kubelet[2998]: E0121 06:19:49.133067 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.134047 kubelet[2998]: E0121 06:19:49.133541 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.134047 kubelet[2998]: W0121 06:19:49.133553 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.134047 kubelet[2998]: E0121 06:19:49.133565 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.135258 kubelet[2998]: E0121 06:19:49.135055 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.135258 kubelet[2998]: W0121 06:19:49.135065 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.135258 kubelet[2998]: E0121 06:19:49.135076 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.138861 containerd[1588]: time="2026-01-21T06:19:49.138506514Z" level=info msg="connecting to shim e7ea0ab1d7b3abba19d4bf1c9f41d6cbca52060490494b15cee4239f41909d3e" address="unix:///run/containerd/s/210bf9b12c9b748fc8a84ac07bae5344a659002ab03befe8a6acc3dfbabb9afb" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:19:49.141369 kubelet[2998]: E0121 06:19:49.141089 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.141369 kubelet[2998]: W0121 06:19:49.141217 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.141369 kubelet[2998]: E0121 06:19:49.141237 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.145452 kubelet[2998]: E0121 06:19:49.144805 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.145452 kubelet[2998]: W0121 06:19:49.145581 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.145835 kubelet[2998]: E0121 06:19:49.145600 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.153946 kubelet[2998]: E0121 06:19:49.153867 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.154604 kubelet[2998]: W0121 06:19:49.154459 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.154604 kubelet[2998]: E0121 06:19:49.154591 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.156378 kubelet[2998]: E0121 06:19:49.156289 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.156378 kubelet[2998]: W0121 06:19:49.156362 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.156478 kubelet[2998]: E0121 06:19:49.156379 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.159739 kubelet[2998]: E0121 06:19:49.159532 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.159739 kubelet[2998]: W0121 06:19:49.159548 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.159739 kubelet[2998]: E0121 06:19:49.159561 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.160918 kubelet[2998]: E0121 06:19:49.160542 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.160918 kubelet[2998]: W0121 06:19:49.160902 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.162075 kubelet[2998]: E0121 06:19:49.161809 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.164381 kubelet[2998]: E0121 06:19:49.164310 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.164381 kubelet[2998]: W0121 06:19:49.164330 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.164381 kubelet[2998]: E0121 06:19:49.164345 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.166313 kubelet[2998]: E0121 06:19:49.166081 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.166313 kubelet[2998]: W0121 06:19:49.166098 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.166437 kubelet[2998]: E0121 06:19:49.166370 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.170354 kubelet[2998]: E0121 06:19:49.169858 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.170354 kubelet[2998]: W0121 06:19:49.169921 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.170354 kubelet[2998]: E0121 06:19:49.169935 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.173294 kubelet[2998]: E0121 06:19:49.172856 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.173852 kubelet[2998]: W0121 06:19:49.173048 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.174854 kubelet[2998]: E0121 06:19:49.174526 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.176938 kubelet[2998]: E0121 06:19:49.176867 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.177507 kubelet[2998]: W0121 06:19:49.177488 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.177587 kubelet[2998]: E0121 06:19:49.177571 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.180771 kubelet[2998]: E0121 06:19:49.179461 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.180771 kubelet[2998]: W0121 06:19:49.179473 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.180771 kubelet[2998]: E0121 06:19:49.179486 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.180978 kubelet[2998]: E0121 06:19:49.180834 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.180978 kubelet[2998]: W0121 06:19:49.180973 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.181059 kubelet[2998]: E0121 06:19:49.180988 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.181418 kubelet[2998]: I0121 06:19:49.181264 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/219deac5-c979-42b1-a796-a0c185470d95-kubelet-dir\") pod \"csi-node-driver-w4vl7\" (UID: \"219deac5-c979-42b1-a796-a0c185470d95\") " pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:19:49.181418 kubelet[2998]: E0121 06:19:49.181358 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.181418 kubelet[2998]: W0121 06:19:49.181367 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.181418 kubelet[2998]: E0121 06:19:49.181378 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.185123 kubelet[2998]: E0121 06:19:49.184885 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.185123 kubelet[2998]: W0121 06:19:49.184902 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.185123 kubelet[2998]: E0121 06:19:49.184914 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.185123 kubelet[2998]: I0121 06:19:49.184943 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/219deac5-c979-42b1-a796-a0c185470d95-socket-dir\") pod \"csi-node-driver-w4vl7\" (UID: \"219deac5-c979-42b1-a796-a0c185470d95\") " pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:19:49.185566 kubelet[2998]: E0121 06:19:49.185546 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.185835 kubelet[2998]: W0121 06:19:49.185815 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.185907 kubelet[2998]: E0121 06:19:49.185891 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.186492 kubelet[2998]: I0121 06:19:49.186477 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/219deac5-c979-42b1-a796-a0c185470d95-registration-dir\") pod \"csi-node-driver-w4vl7\" (UID: \"219deac5-c979-42b1-a796-a0c185470d95\") " pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:19:49.186864 kubelet[2998]: E0121 06:19:49.186848 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.186929 kubelet[2998]: W0121 06:19:49.186918 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.186978 kubelet[2998]: E0121 06:19:49.186968 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.187825 kubelet[2998]: E0121 06:19:49.187812 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.187889 kubelet[2998]: W0121 06:19:49.187878 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.187931 kubelet[2998]: E0121 06:19:49.187922 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.188802 kubelet[2998]: E0121 06:19:49.188787 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.188861 kubelet[2998]: W0121 06:19:49.188851 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.188901 kubelet[2998]: E0121 06:19:49.188892 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.189435 kubelet[2998]: I0121 06:19:49.189408 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/219deac5-c979-42b1-a796-a0c185470d95-varrun\") pod \"csi-node-driver-w4vl7\" (UID: \"219deac5-c979-42b1-a796-a0c185470d95\") " pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:19:49.189777 kubelet[2998]: E0121 06:19:49.189587 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.189777 kubelet[2998]: W0121 06:19:49.189602 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.189883 kubelet[2998]: E0121 06:19:49.189614 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.191932 kubelet[2998]: E0121 06:19:49.191601 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.191932 kubelet[2998]: W0121 06:19:49.191780 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.191932 kubelet[2998]: E0121 06:19:49.191796 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.192790 kubelet[2998]: E0121 06:19:49.192483 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.192790 kubelet[2998]: W0121 06:19:49.192496 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.192790 kubelet[2998]: E0121 06:19:49.192508 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.193551 kubelet[2998]: E0121 06:19:49.193501 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.193551 kubelet[2998]: W0121 06:19:49.193515 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.193551 kubelet[2998]: E0121 06:19:49.193527 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.194916 kubelet[2998]: E0121 06:19:49.194015 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.194916 kubelet[2998]: W0121 06:19:49.194027 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.194916 kubelet[2998]: E0121 06:19:49.194040 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.196002 kubelet[2998]: E0121 06:19:49.195975 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.196002 kubelet[2998]: W0121 06:19:49.195992 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.196510 kubelet[2998]: E0121 06:19:49.196004 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.197076 kubelet[2998]: E0121 06:19:49.196586 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.197076 kubelet[2998]: W0121 06:19:49.196597 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.197076 kubelet[2998]: E0121 06:19:49.196608 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.198463 kubelet[2998]: E0121 06:19:49.197729 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.198463 kubelet[2998]: W0121 06:19:49.197796 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.198463 kubelet[2998]: E0121 06:19:49.197809 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.261088 systemd[1]: Started cri-containerd-e7ea0ab1d7b3abba19d4bf1c9f41d6cbca52060490494b15cee4239f41909d3e.scope - libcontainer container e7ea0ab1d7b3abba19d4bf1c9f41d6cbca52060490494b15cee4239f41909d3e. Jan 21 06:19:49.278428 kubelet[2998]: E0121 06:19:49.278303 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:49.280037 containerd[1588]: time="2026-01-21T06:19:49.279977972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bg4vn,Uid:402849c8-3365-4889-8bee-b93131b414d6,Namespace:calico-system,Attempt:0,}" Jan 21 06:19:49.300444 kubelet[2998]: E0121 06:19:49.300402 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.300547 kubelet[2998]: W0121 06:19:49.300534 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.300739 kubelet[2998]: E0121 06:19:49.300615 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.301747 kubelet[2998]: E0121 06:19:49.301605 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.301810 kubelet[2998]: W0121 06:19:49.301798 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.301874 kubelet[2998]: E0121 06:19:49.301857 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.303084 kubelet[2998]: E0121 06:19:49.302913 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.303084 kubelet[2998]: W0121 06:19:49.302927 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.303084 kubelet[2998]: E0121 06:19:49.302940 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.303755 kubelet[2998]: E0121 06:19:49.303610 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.303820 kubelet[2998]: W0121 06:19:49.303808 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.303968 kubelet[2998]: E0121 06:19:49.303853 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.308596 kubelet[2998]: E0121 06:19:49.308494 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.308596 kubelet[2998]: W0121 06:19:49.308509 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.308596 kubelet[2998]: E0121 06:19:49.308527 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.309749 kubelet[2998]: E0121 06:19:49.309416 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.309749 kubelet[2998]: W0121 06:19:49.309434 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.309749 kubelet[2998]: E0121 06:19:49.309447 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.310783 kubelet[2998]: E0121 06:19:49.310417 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.310783 kubelet[2998]: W0121 06:19:49.310487 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.310783 kubelet[2998]: E0121 06:19:49.310508 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.311497 kubelet[2998]: E0121 06:19:49.311243 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.311497 kubelet[2998]: W0121 06:19:49.311300 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.311497 kubelet[2998]: E0121 06:19:49.311315 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.312439 kubelet[2998]: E0121 06:19:49.312230 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.312439 kubelet[2998]: W0121 06:19:49.312316 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.312439 kubelet[2998]: E0121 06:19:49.312333 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.314000 audit: BPF prog-id=151 op=LOAD Jan 21 06:19:49.316414 kubelet[2998]: E0121 06:19:49.314441 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.316414 kubelet[2998]: W0121 06:19:49.314465 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.316414 kubelet[2998]: E0121 06:19:49.314550 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.316414 kubelet[2998]: E0121 06:19:49.315958 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.316414 kubelet[2998]: W0121 06:19:49.315970 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.316414 kubelet[2998]: E0121 06:19:49.315983 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.318713 kubelet[2998]: E0121 06:19:49.317807 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.318713 kubelet[2998]: W0121 06:19:49.318261 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.318713 kubelet[2998]: E0121 06:19:49.318281 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.318713 kubelet[2998]: I0121 06:19:49.318371 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskfc\" (UniqueName: \"kubernetes.io/projected/219deac5-c979-42b1-a796-a0c185470d95-kube-api-access-jskfc\") pod \"csi-node-driver-w4vl7\" (UID: \"219deac5-c979-42b1-a796-a0c185470d95\") " pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:19:49.318713 kubelet[2998]: E0121 06:19:49.318559 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.318713 kubelet[2998]: W0121 06:19:49.318567 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.318713 kubelet[2998]: E0121 06:19:49.318575 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.319426 kubelet[2998]: E0121 06:19:49.319299 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.319426 kubelet[2998]: W0121 06:19:49.319375 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.319426 kubelet[2998]: E0121 06:19:49.319388 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.319980 kubelet[2998]: E0121 06:19:49.319847 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.319980 kubelet[2998]: W0121 06:19:49.319917 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.319980 kubelet[2998]: E0121 06:19:49.319930 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.321825 kubelet[2998]: E0121 06:19:49.321720 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.321825 kubelet[2998]: W0121 06:19:49.321790 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.321825 kubelet[2998]: E0121 06:19:49.321806 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.328859 kernel: audit: type=1334 audit(1768976389.314:556): prog-id=151 op=LOAD Jan 21 06:19:49.328917 kernel: audit: type=1334 audit(1768976389.315:557): prog-id=152 op=LOAD Jan 21 06:19:49.315000 audit: BPF prog-id=152 op=LOAD Jan 21 06:19:49.328977 kubelet[2998]: E0121 06:19:49.322442 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.328977 kubelet[2998]: W0121 06:19:49.322455 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.328977 kubelet[2998]: E0121 06:19:49.322471 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.328977 kubelet[2998]: E0121 06:19:49.326071 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.328977 kubelet[2998]: W0121 06:19:49.326091 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.328977 kubelet[2998]: E0121 06:19:49.326110 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.329792 kernel: audit: type=1300 audit(1768976389.315:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.332275 kubelet[2998]: E0121 06:19:49.329612 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.332275 kubelet[2998]: W0121 06:19:49.331947 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.332275 kubelet[2998]: E0121 06:19:49.331969 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.333861 kubelet[2998]: E0121 06:19:49.333561 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.333861 kubelet[2998]: W0121 06:19:49.333801 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.333861 kubelet[2998]: E0121 06:19:49.333822 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.335938 kubelet[2998]: E0121 06:19:49.335848 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.335938 kubelet[2998]: W0121 06:19:49.335930 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.336017 kubelet[2998]: E0121 06:19:49.335950 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.336321 containerd[1588]: time="2026-01-21T06:19:49.336124194Z" level=info msg="connecting to shim e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242" address="unix:///run/containerd/s/25a7b4f22c4b09614481c147da64642e367b577ea3c3a369c1aaadb752a763ca" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:19:49.338113 kubelet[2998]: E0121 06:19:49.337976 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.338113 kubelet[2998]: W0121 06:19:49.338058 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.338113 kubelet[2998]: E0121 06:19:49.338075 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.339751 kubelet[2998]: E0121 06:19:49.339073 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.339751 kubelet[2998]: W0121 06:19:49.339088 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.339751 kubelet[2998]: E0121 06:19:49.339101 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.377968 kernel: audit: type=1327 audit(1768976389.315:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.315000 audit: BPF prog-id=152 op=UNLOAD Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.315000 audit: BPF prog-id=153 op=LOAD Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230488 a2=98 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.315000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.315000 audit: BPF prog-id=154 op=LOAD Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000230218 a2=98 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.315000 audit: BPF prog-id=154 op=UNLOAD Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.315000 audit: BPF prog-id=153 op=UNLOAD Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:19:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.315000 audit: BPF prog-id=155 op=LOAD Jan 21 06:19:49.315000 audit[3473]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002306e8 a2=98 a3=0 items=0 ppid=3449 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537656130616231643762336162626131396434626631633966343164 Jan 21 06:19:49.421324 kubelet[2998]: E0121 06:19:49.421100 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.421324 kubelet[2998]: W0121 06:19:49.421217 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.421324 kubelet[2998]: E0121 06:19:49.421245 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.428022 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.434035 kubelet[2998]: W0121 06:19:49.428119 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.428225 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.428813 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.434035 kubelet[2998]: W0121 06:19:49.428827 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.428841 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.430852 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.434035 kubelet[2998]: W0121 06:19:49.430862 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.430874 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.434035 kubelet[2998]: E0121 06:19:49.433602 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.434510 kubelet[2998]: W0121 06:19:49.433614 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.434510 kubelet[2998]: E0121 06:19:49.433770 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.441419 systemd[1]: Started cri-containerd-e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242.scope - libcontainer container e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242. 
Jan 21 06:19:49.472048 kubelet[2998]: E0121 06:19:49.471585 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:49.472048 kubelet[2998]: W0121 06:19:49.471779 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:49.472048 kubelet[2998]: E0121 06:19:49.471799 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:49.476917 containerd[1588]: time="2026-01-21T06:19:49.476584345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9c6c4f48-kzbws,Uid:bb894753-8286-4b3c-a52c-df19241c26bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7ea0ab1d7b3abba19d4bf1c9f41d6cbca52060490494b15cee4239f41909d3e\"" Jan 21 06:19:49.483000 audit: BPF prog-id=156 op=LOAD Jan 21 06:19:49.484000 audit: BPF prog-id=157 op=LOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3536 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.484000 audit: BPF prog-id=157 op=UNLOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.484000 audit: BPF prog-id=158 op=LOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3536 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.484000 audit: BPF prog-id=159 op=LOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3536 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.484000 audit: BPF prog-id=159 op=UNLOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3552 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.484000 audit: BPF prog-id=158 op=UNLOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.484000 audit: BPF prog-id=160 op=LOAD Jan 21 06:19:49.484000 audit[3552]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3536 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538303366363635306363343936336436356433316163373466333161 Jan 21 06:19:49.488114 kubelet[2998]: E0121 06:19:49.486291 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:49.496936 containerd[1588]: time="2026-01-21T06:19:49.496034410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 21 06:19:49.560835 containerd[1588]: time="2026-01-21T06:19:49.560743163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bg4vn,Uid:402849c8-3365-4889-8bee-b93131b414d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\"" Jan 21 06:19:49.562463 kubelet[2998]: E0121 06:19:49.562358 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:49.774000 audit[3593]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:49.774000 audit[3593]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeddce57b0 a2=0 a3=7ffeddce579c items=0 ppid=3160 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:49.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:49.780000 audit[3593]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3593 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:19:49.780000 audit[3593]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeddce57b0 a2=0 a3=0 items=0 ppid=3160 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 21 06:19:49.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:19:50.295886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269750387.mount: Deactivated successfully. Jan 21 06:19:50.503764 kubelet[2998]: E0121 06:19:50.503297 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:19:52.503065 kubelet[2998]: E0121 06:19:52.502906 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:19:52.750405 containerd[1588]: time="2026-01-21T06:19:52.750260355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:52.753370 containerd[1588]: time="2026-01-21T06:19:52.753082965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 21 06:19:52.755602 containerd[1588]: time="2026-01-21T06:19:52.755571770Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:52.759466 containerd[1588]: time="2026-01-21T06:19:52.759435450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 21 06:19:52.761356 containerd[1588]: time="2026-01-21T06:19:52.760990553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.264921218s" Jan 21 06:19:52.761356 containerd[1588]: time="2026-01-21T06:19:52.761098523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 21 06:19:52.764794 containerd[1588]: time="2026-01-21T06:19:52.764450031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 21 06:19:52.817960 containerd[1588]: time="2026-01-21T06:19:52.817032255Z" level=info msg="CreateContainer within sandbox \"e7ea0ab1d7b3abba19d4bf1c9f41d6cbca52060490494b15cee4239f41909d3e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 21 06:19:52.845353 containerd[1588]: time="2026-01-21T06:19:52.845186224Z" level=info msg="Container 3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:52.885283 containerd[1588]: time="2026-01-21T06:19:52.885028348Z" level=info msg="CreateContainer within sandbox \"e7ea0ab1d7b3abba19d4bf1c9f41d6cbca52060490494b15cee4239f41909d3e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61\"" Jan 21 06:19:52.888425 containerd[1588]: time="2026-01-21T06:19:52.888008350Z" level=info msg="StartContainer for \"3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61\"" Jan 21 06:19:52.890064 containerd[1588]: time="2026-01-21T06:19:52.889943807Z" level=info msg="connecting to shim 
3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61" address="unix:///run/containerd/s/210bf9b12c9b748fc8a84ac07bae5344a659002ab03befe8a6acc3dfbabb9afb" protocol=ttrpc version=3 Jan 21 06:19:52.981100 systemd[1]: Started cri-containerd-3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61.scope - libcontainer container 3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61. Jan 21 06:19:53.050000 audit: BPF prog-id=161 op=LOAD Jan 21 06:19:53.054000 audit: BPF prog-id=162 op=LOAD Jan 21 06:19:53.054000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.054000 audit: BPF prog-id=162 op=UNLOAD Jan 21 06:19:53.054000 audit[3604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.055000 audit: BPF prog-id=163 op=LOAD Jan 21 06:19:53.055000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 
items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.058000 audit: BPF prog-id=164 op=LOAD Jan 21 06:19:53.058000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.058000 audit: BPF prog-id=164 op=UNLOAD Jan 21 06:19:53.058000 audit[3604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.060000 audit: BPF prog-id=163 op=UNLOAD Jan 21 06:19:53.060000 audit[3604]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.061000 audit: BPF prog-id=165 op=LOAD Jan 21 06:19:53.061000 audit[3604]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3449 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:53.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363396531653566356564323139643666633139303362633434316333 Jan 21 06:19:53.243597 containerd[1588]: time="2026-01-21T06:19:53.243385693Z" level=info msg="StartContainer for \"3c9e1e5f5ed219d6fc1903bc441c3113c62a36b90f7f0a83c6ad9dee642caf61\" returns successfully" Jan 21 06:19:53.732861 kubelet[2998]: E0121 06:19:53.730611 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:53.753184 kubelet[2998]: E0121 06:19:53.752608 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.753408 kubelet[2998]: W0121 06:19:53.753279 2998 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.753408 kubelet[2998]: E0121 06:19:53.753306 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.756057 kubelet[2998]: E0121 06:19:53.755524 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.756057 kubelet[2998]: W0121 06:19:53.755539 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.756057 kubelet[2998]: E0121 06:19:53.755556 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.758092 kubelet[2998]: E0121 06:19:53.757842 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.758092 kubelet[2998]: W0121 06:19:53.757858 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.758092 kubelet[2998]: E0121 06:19:53.757881 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.759750 kubelet[2998]: E0121 06:19:53.758841 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.759750 kubelet[2998]: W0121 06:19:53.758859 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.759750 kubelet[2998]: E0121 06:19:53.758880 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.761945 kubelet[2998]: E0121 06:19:53.761913 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.761945 kubelet[2998]: W0121 06:19:53.761932 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.762051 kubelet[2998]: E0121 06:19:53.761948 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.762834 kubelet[2998]: E0121 06:19:53.762360 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.762834 kubelet[2998]: W0121 06:19:53.762437 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.762834 kubelet[2998]: E0121 06:19:53.762453 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.764838 kubelet[2998]: E0121 06:19:53.764810 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.764838 kubelet[2998]: W0121 06:19:53.764827 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.765024 kubelet[2998]: E0121 06:19:53.764844 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.766243 kubelet[2998]: E0121 06:19:53.766035 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.766243 kubelet[2998]: W0121 06:19:53.766048 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.766243 kubelet[2998]: E0121 06:19:53.766058 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.768807 kubelet[2998]: E0121 06:19:53.768466 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.768807 kubelet[2998]: W0121 06:19:53.768529 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.768807 kubelet[2998]: E0121 06:19:53.768540 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.771192 kubelet[2998]: E0121 06:19:53.770817 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.771192 kubelet[2998]: W0121 06:19:53.770839 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.771192 kubelet[2998]: E0121 06:19:53.770849 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.772210 kubelet[2998]: E0121 06:19:53.772094 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.772210 kubelet[2998]: W0121 06:19:53.772103 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.772210 kubelet[2998]: E0121 06:19:53.772208 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.773334 kubelet[2998]: E0121 06:19:53.773085 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.773334 kubelet[2998]: W0121 06:19:53.773096 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.773334 kubelet[2998]: E0121 06:19:53.773185 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.774973 kubelet[2998]: E0121 06:19:53.774884 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.774973 kubelet[2998]: W0121 06:19:53.774919 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.774973 kubelet[2998]: E0121 06:19:53.774949 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.776846 kubelet[2998]: E0121 06:19:53.776800 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.776846 kubelet[2998]: W0121 06:19:53.776828 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.776846 kubelet[2998]: E0121 06:19:53.776853 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.784410 kubelet[2998]: E0121 06:19:53.780834 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.784410 kubelet[2998]: W0121 06:19:53.780854 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.784410 kubelet[2998]: E0121 06:19:53.780871 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.784410 kubelet[2998]: E0121 06:19:53.783873 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.784410 kubelet[2998]: W0121 06:19:53.783893 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.784410 kubelet[2998]: E0121 06:19:53.783914 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.789536 kubelet[2998]: E0121 06:19:53.789274 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.789536 kubelet[2998]: W0121 06:19:53.789302 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.789536 kubelet[2998]: E0121 06:19:53.789324 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.793752 kubelet[2998]: E0121 06:19:53.792346 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.793752 kubelet[2998]: W0121 06:19:53.792370 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.793752 kubelet[2998]: E0121 06:19:53.792391 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.801755 kubelet[2998]: E0121 06:19:53.799838 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.801755 kubelet[2998]: W0121 06:19:53.799864 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.801755 kubelet[2998]: E0121 06:19:53.799886 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.811465 kubelet[2998]: E0121 06:19:53.811035 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.811465 kubelet[2998]: W0121 06:19:53.811219 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.811465 kubelet[2998]: E0121 06:19:53.811262 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.815920 kubelet[2998]: E0121 06:19:53.815838 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.815920 kubelet[2998]: W0121 06:19:53.815867 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.815920 kubelet[2998]: E0121 06:19:53.815890 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.822066 kubelet[2998]: E0121 06:19:53.821838 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.822066 kubelet[2998]: W0121 06:19:53.821862 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.822066 kubelet[2998]: E0121 06:19:53.821883 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.824552 kubelet[2998]: E0121 06:19:53.824361 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.824552 kubelet[2998]: W0121 06:19:53.824384 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.824552 kubelet[2998]: E0121 06:19:53.824404 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.827014 kubelet[2998]: E0121 06:19:53.826897 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.827014 kubelet[2998]: W0121 06:19:53.826993 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.827014 kubelet[2998]: E0121 06:19:53.827013 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.848915 kubelet[2998]: E0121 06:19:53.848405 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.848915 kubelet[2998]: W0121 06:19:53.848506 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.848915 kubelet[2998]: E0121 06:19:53.848545 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.852885 kubelet[2998]: E0121 06:19:53.851872 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.852885 kubelet[2998]: W0121 06:19:53.851918 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.852885 kubelet[2998]: E0121 06:19:53.851947 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.854305 kubelet[2998]: E0121 06:19:53.854240 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.854305 kubelet[2998]: W0121 06:19:53.854265 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.854305 kubelet[2998]: E0121 06:19:53.854287 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.858436 kubelet[2998]: E0121 06:19:53.858312 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.858436 kubelet[2998]: W0121 06:19:53.858340 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.858436 kubelet[2998]: E0121 06:19:53.858361 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.864023 kubelet[2998]: E0121 06:19:53.863246 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.864023 kubelet[2998]: W0121 06:19:53.863348 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.864023 kubelet[2998]: E0121 06:19:53.863373 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.867509 kubelet[2998]: E0121 06:19:53.867051 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.867509 kubelet[2998]: W0121 06:19:53.867073 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.867509 kubelet[2998]: E0121 06:19:53.867091 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.869585 kubelet[2998]: E0121 06:19:53.869276 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.869585 kubelet[2998]: W0121 06:19:53.869355 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.869585 kubelet[2998]: E0121 06:19:53.869377 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.870564 kubelet[2998]: E0121 06:19:53.870484 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.870564 kubelet[2998]: W0121 06:19:53.870561 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.870803 kubelet[2998]: E0121 06:19:53.870578 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 06:19:53.872300 kubelet[2998]: E0121 06:19:53.872209 2998 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 06:19:53.872300 kubelet[2998]: W0121 06:19:53.872293 2998 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 06:19:53.872381 kubelet[2998]: E0121 06:19:53.872309 2998 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 06:19:53.924033 containerd[1588]: time="2026-01-21T06:19:53.923957281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:53.927853 containerd[1588]: time="2026-01-21T06:19:53.927549018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 21 06:19:53.931281 containerd[1588]: time="2026-01-21T06:19:53.931032094Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:53.944812 containerd[1588]: time="2026-01-21T06:19:53.944507347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:19:53.945427 containerd[1588]: time="2026-01-21T06:19:53.945396586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.180073606s" Jan 21 06:19:53.945540 containerd[1588]: time="2026-01-21T06:19:53.945520124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 21 06:19:53.957374 containerd[1588]: time="2026-01-21T06:19:53.957062877Z" level=info msg="CreateContainer within sandbox \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 21 06:19:53.990950 containerd[1588]: time="2026-01-21T06:19:53.988616372Z" level=info msg="Container 8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:19:54.017899 containerd[1588]: time="2026-01-21T06:19:54.017828530Z" level=info msg="CreateContainer within sandbox \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140\"" Jan 21 06:19:54.020953 containerd[1588]: time="2026-01-21T06:19:54.020045604Z" level=info msg="StartContainer for \"8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140\"" Jan 21 06:19:54.021875 containerd[1588]: time="2026-01-21T06:19:54.021577915Z" level=info msg="connecting to shim 8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140" address="unix:///run/containerd/s/25a7b4f22c4b09614481c147da64642e367b577ea3c3a369c1aaadb752a763ca" protocol=ttrpc version=3 Jan 21 06:19:54.111296 systemd[1]: Started cri-containerd-8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140.scope - libcontainer container 8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140. 
Jan 21 06:19:54.297852 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 21 06:19:54.297995 kernel: audit: type=1334 audit(1768976394.281:582): prog-id=166 op=LOAD Jan 21 06:19:54.281000 audit: BPF prog-id=166 op=LOAD Jan 21 06:19:54.299815 kernel: audit: type=1300 audit(1768976394.281:582): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.281000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.327876 kernel: audit: type=1327 audit(1768976394.281:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.283000 audit: BPF prog-id=167 op=LOAD Jan 21 06:19:54.283000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.397930 kernel: audit: type=1334 
audit(1768976394.283:583): prog-id=167 op=LOAD Jan 21 06:19:54.398059 kernel: audit: type=1300 audit(1768976394.283:583): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.398100 kernel: audit: type=1327 audit(1768976394.283:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.432906 kernel: audit: type=1334 audit(1768976394.283:584): prog-id=167 op=UNLOAD Jan 21 06:19:54.283000 audit: BPF prog-id=167 op=UNLOAD Jan 21 06:19:54.461854 kernel: audit: type=1300 audit(1768976394.283:584): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.283000 audit[3682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.283000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.485857 kernel: audit: type=1327 audit(1768976394.283:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.485940 kernel: audit: type=1334 audit(1768976394.283:585): prog-id=166 op=UNLOAD Jan 21 06:19:54.283000 audit: BPF prog-id=166 op=UNLOAD Jan 21 06:19:54.283000 audit[3682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.283000 audit: BPF prog-id=168 op=LOAD Jan 21 06:19:54.283000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f46e8 a2=98 a3=0 items=0 ppid=3536 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:19:54.283000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836393865383935633635343239626532353635373737666238666636 Jan 21 06:19:54.506943 kubelet[2998]: E0121 06:19:54.506798 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:19:54.537281 containerd[1588]: time="2026-01-21T06:19:54.534813647Z" level=info msg="StartContainer for \"8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140\" returns successfully" Jan 21 06:19:54.618239 systemd[1]: cri-containerd-8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140.scope: Deactivated successfully. Jan 21 06:19:54.618968 systemd[1]: cri-containerd-8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140.scope: Consumed 162ms CPU time, 6.4M memory peak, 4.1M written to disk. Jan 21 06:19:54.623000 audit: BPF prog-id=168 op=UNLOAD Jan 21 06:19:54.626888 containerd[1588]: time="2026-01-21T06:19:54.626598379Z" level=info msg="received container exit event container_id:\"8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140\" id:\"8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140\" pid:3695 exited_at:{seconds:1768976394 nanos:625310891}" Jan 21 06:19:54.725095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8698e895c65429be2565777fb8ff6c385d5dcd5d8a205f29691fc3f026e62140-rootfs.mount: Deactivated successfully. 
Jan 21 06:19:54.747479 kubelet[2998]: I0121 06:19:54.747435 2998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:19:54.766031 kubelet[2998]: E0121 06:19:54.748036 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:54.766031 kubelet[2998]: E0121 06:19:54.749552 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:54.820972 kubelet[2998]: I0121 06:19:54.817553 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b9c6c4f48-kzbws" podStartSLOduration=3.548023138 podStartE2EDuration="6.817530723s" podCreationTimestamp="2026-01-21 06:19:48 +0000 UTC" firstStartedPulling="2026-01-21 06:19:49.493766119 +0000 UTC m=+29.214672680" lastFinishedPulling="2026-01-21 06:19:52.763273694 +0000 UTC m=+32.484180265" observedRunningTime="2026-01-21 06:19:53.807553263 +0000 UTC m=+33.528459823" watchObservedRunningTime="2026-01-21 06:19:54.817530723 +0000 UTC m=+34.538437294" Jan 21 06:19:55.756510 kubelet[2998]: E0121 06:19:55.756415 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:19:55.757832 containerd[1588]: time="2026-01-21T06:19:55.757422293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 21 06:19:56.502223 kubelet[2998]: E0121 06:19:56.502174 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" 
Jan 21 06:19:58.503968 kubelet[2998]: E0121 06:19:58.503045 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:00.503224 kubelet[2998]: E0121 06:20:00.503042 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:01.041476 containerd[1588]: time="2026-01-21T06:20:01.041201560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:20:01.044579 containerd[1588]: time="2026-01-21T06:20:01.044219190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 21 06:20:01.047014 containerd[1588]: time="2026-01-21T06:20:01.046871435Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:20:01.056246 containerd[1588]: time="2026-01-21T06:20:01.056061452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:20:01.057448 containerd[1588]: time="2026-01-21T06:20:01.057163287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.299629248s" Jan 21 06:20:01.057448 containerd[1588]: time="2026-01-21T06:20:01.057195407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 21 06:20:01.074365 containerd[1588]: time="2026-01-21T06:20:01.074263318Z" level=info msg="CreateContainer within sandbox \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 06:20:01.105166 containerd[1588]: time="2026-01-21T06:20:01.104927711Z" level=info msg="Container 97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:20:01.134564 containerd[1588]: time="2026-01-21T06:20:01.134365692Z" level=info msg="CreateContainer within sandbox \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2\"" Jan 21 06:20:01.136803 containerd[1588]: time="2026-01-21T06:20:01.135995749Z" level=info msg="StartContainer for \"97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2\"" Jan 21 06:20:01.138336 containerd[1588]: time="2026-01-21T06:20:01.138239993Z" level=info msg="connecting to shim 97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2" address="unix:///run/containerd/s/25a7b4f22c4b09614481c147da64642e367b577ea3c3a369c1aaadb752a763ca" protocol=ttrpc version=3 Jan 21 06:20:01.206257 systemd[1]: Started cri-containerd-97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2.scope - libcontainer container 97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2. 
Jan 21 06:20:01.334000 audit: BPF prog-id=169 op=LOAD Jan 21 06:20:01.342506 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 06:20:01.342852 kernel: audit: type=1334 audit(1768976401.334:588): prog-id=169 op=LOAD Jan 21 06:20:01.334000 audit[3748]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.405550 kernel: audit: type=1300 audit(1768976401.334:588): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.405872 kernel: audit: type=1327 audit(1768976401.334:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.405920 kernel: audit: type=1334 audit(1768976401.334:589): prog-id=170 op=LOAD Jan 21 06:20:01.334000 audit: BPF prog-id=170 op=LOAD Jan 21 06:20:01.334000 audit[3748]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.441223 kernel: audit: type=1300 audit(1768976401.334:589): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.335000 audit: BPF prog-id=170 op=UNLOAD Jan 21 06:20:01.475356 kernel: audit: type=1327 audit(1768976401.334:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.475607 kernel: audit: type=1334 audit(1768976401.335:590): prog-id=170 op=UNLOAD Jan 21 06:20:01.475800 kernel: audit: type=1300 audit(1768976401.335:590): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.335000 audit[3748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.500967 kernel: audit: type=1327 audit(1768976401.335:590): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.335000 audit: BPF prog-id=169 op=UNLOAD Jan 21 06:20:01.335000 audit[3748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.534868 kernel: audit: type=1334 audit(1768976401.335:591): prog-id=169 op=UNLOAD Jan 21 06:20:01.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.335000 audit: BPF prog-id=171 op=LOAD Jan 21 06:20:01.335000 audit[3748]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3536 pid=3748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:01.335000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937636664373131373730663539353363313939656239663937383061 Jan 21 06:20:01.550348 containerd[1588]: time="2026-01-21T06:20:01.548367489Z" level=info msg="StartContainer for \"97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2\" returns successfully" Jan 21 06:20:01.805748 kubelet[2998]: E0121 06:20:01.802792 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:02.411572 kubelet[2998]: I0121 06:20:02.411494 2998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:20:02.414573 kubelet[2998]: E0121 06:20:02.414331 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:02.504778 kubelet[2998]: E0121 06:20:02.504353 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:02.555000 audit[3781]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:02.555000 audit[3781]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd38a7fe70 a2=0 a3=7ffd38a7fe5c items=0 ppid=3160 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 21 06:20:02.555000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:02.567000 audit[3781]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:02.567000 audit[3781]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd38a7fe70 a2=0 a3=7ffd38a7fe5c items=0 ppid=3160 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:02.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:02.807200 kubelet[2998]: E0121 06:20:02.806961 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:02.810053 kubelet[2998]: E0121 06:20:02.809819 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:03.692397 systemd[1]: cri-containerd-97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2.scope: Deactivated successfully. Jan 21 06:20:03.694572 systemd[1]: cri-containerd-97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2.scope: Consumed 2.124s CPU time, 176.2M memory peak, 3.2M read from disk, 171.3M written to disk. 
Jan 21 06:20:03.698790 containerd[1588]: time="2026-01-21T06:20:03.698309419Z" level=info msg="received container exit event container_id:\"97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2\" id:\"97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2\" pid:3760 exited_at:{seconds:1768976403 nanos:694037205}" Jan 21 06:20:03.698000 audit: BPF prog-id=171 op=UNLOAD Jan 21 06:20:03.785517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97cfd711770f5953c199eb9f9780a962809b2b604cf5eaff41c6d95fce0c17a2-rootfs.mount: Deactivated successfully. Jan 21 06:20:03.873439 kubelet[2998]: I0121 06:20:03.872950 2998 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 21 06:20:04.053229 systemd[1]: Created slice kubepods-burstable-pod00cbc947_52ff_416d_bc74_328c0c5546b4.slice - libcontainer container kubepods-burstable-pod00cbc947_52ff_416d_bc74_328c0c5546b4.slice. Jan 21 06:20:04.074427 systemd[1]: Created slice kubepods-burstable-podd179681c_11cd_468d_ad87_dad9a234715d.slice - libcontainer container kubepods-burstable-podd179681c_11cd_468d_ad87_dad9a234715d.slice. Jan 21 06:20:04.104518 systemd[1]: Created slice kubepods-besteffort-pod44e1484f_18ef_43d7_8551_7c92cf1926c4.slice - libcontainer container kubepods-besteffort-pod44e1484f_18ef_43d7_8551_7c92cf1926c4.slice. Jan 21 06:20:04.119206 systemd[1]: Created slice kubepods-besteffort-pod6da7defa_596e_458c_83a7_a38c6a1d3cd4.slice - libcontainer container kubepods-besteffort-pod6da7defa_596e_458c_83a7_a38c6a1d3cd4.slice. 
Jan 21 06:20:04.136256 kubelet[2998]: I0121 06:20:04.135237 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-backend-key-pair\") pod \"whisker-794c58f5c4-nr82l\" (UID: \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\") " pod="calico-system/whisker-794c58f5c4-nr82l" Jan 21 06:20:04.140748 kubelet[2998]: I0121 06:20:04.140523 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-ca-bundle\") pod \"whisker-794c58f5c4-nr82l\" (UID: \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\") " pod="calico-system/whisker-794c58f5c4-nr82l" Jan 21 06:20:04.141253 kubelet[2998]: I0121 06:20:04.140777 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvl4c\" (UniqueName: \"kubernetes.io/projected/00cbc947-52ff-416d-bc74-328c0c5546b4-kube-api-access-zvl4c\") pod \"coredns-674b8bbfcf-z55s4\" (UID: \"00cbc947-52ff-416d-bc74-328c0c5546b4\") " pod="kube-system/coredns-674b8bbfcf-z55s4" Jan 21 06:20:04.141253 kubelet[2998]: I0121 06:20:04.141181 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d179681c-11cd-468d-ad87-dad9a234715d-config-volume\") pod \"coredns-674b8bbfcf-rqf7b\" (UID: \"d179681c-11cd-468d-ad87-dad9a234715d\") " pod="kube-system/coredns-674b8bbfcf-rqf7b" Jan 21 06:20:04.141253 kubelet[2998]: I0121 06:20:04.141246 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmhg\" (UniqueName: \"kubernetes.io/projected/6da7defa-596e-458c-83a7-a38c6a1d3cd4-kube-api-access-lzmhg\") pod \"whisker-794c58f5c4-nr82l\" (UID: 
\"6da7defa-596e-458c-83a7-a38c6a1d3cd4\") " pod="calico-system/whisker-794c58f5c4-nr82l" Jan 21 06:20:04.141422 kubelet[2998]: I0121 06:20:04.141356 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djjf2\" (UniqueName: \"kubernetes.io/projected/d179681c-11cd-468d-ad87-dad9a234715d-kube-api-access-djjf2\") pod \"coredns-674b8bbfcf-rqf7b\" (UID: \"d179681c-11cd-468d-ad87-dad9a234715d\") " pod="kube-system/coredns-674b8bbfcf-rqf7b" Jan 21 06:20:04.142879 kubelet[2998]: I0121 06:20:04.141835 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00cbc947-52ff-416d-bc74-328c0c5546b4-config-volume\") pod \"coredns-674b8bbfcf-z55s4\" (UID: \"00cbc947-52ff-416d-bc74-328c0c5546b4\") " pod="kube-system/coredns-674b8bbfcf-z55s4" Jan 21 06:20:04.142879 kubelet[2998]: I0121 06:20:04.142259 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5pbf\" (UniqueName: \"kubernetes.io/projected/44e1484f-18ef-43d7-8551-7c92cf1926c4-kube-api-access-q5pbf\") pod \"calico-kube-controllers-797d998774-t5xkn\" (UID: \"44e1484f-18ef-43d7-8551-7c92cf1926c4\") " pod="calico-system/calico-kube-controllers-797d998774-t5xkn" Jan 21 06:20:04.142879 kubelet[2998]: I0121 06:20:04.142478 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44e1484f-18ef-43d7-8551-7c92cf1926c4-tigera-ca-bundle\") pod \"calico-kube-controllers-797d998774-t5xkn\" (UID: \"44e1484f-18ef-43d7-8551-7c92cf1926c4\") " pod="calico-system/calico-kube-controllers-797d998774-t5xkn" Jan 21 06:20:04.155398 systemd[1]: Created slice kubepods-besteffort-pod18fcd4d3_26de_4ac6_99a6_06a703ea7790.slice - libcontainer container kubepods-besteffort-pod18fcd4d3_26de_4ac6_99a6_06a703ea7790.slice. 
Jan 21 06:20:04.174329 systemd[1]: Created slice kubepods-besteffort-pod0928ac10_29ff_4619_8155_c160108ee532.slice - libcontainer container kubepods-besteffort-pod0928ac10_29ff_4619_8155_c160108ee532.slice. Jan 21 06:20:04.200927 systemd[1]: Created slice kubepods-besteffort-podd06b2fe8_bce2_4b8f_842a_8da146f1a644.slice - libcontainer container kubepods-besteffort-podd06b2fe8_bce2_4b8f_842a_8da146f1a644.slice. Jan 21 06:20:04.243356 kubelet[2998]: I0121 06:20:04.243255 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnn8\" (UniqueName: \"kubernetes.io/projected/0928ac10-29ff-4619-8155-c160108ee532-kube-api-access-znnn8\") pod \"calico-apiserver-76f4489f98-lvqcb\" (UID: \"0928ac10-29ff-4619-8155-c160108ee532\") " pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" Jan 21 06:20:04.243356 kubelet[2998]: I0121 06:20:04.243356 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18fcd4d3-26de-4ac6-99a6-06a703ea7790-config\") pod \"goldmane-666569f655-9p9f8\" (UID: \"18fcd4d3-26de-4ac6-99a6-06a703ea7790\") " pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:04.243789 kubelet[2998]: I0121 06:20:04.243384 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18fcd4d3-26de-4ac6-99a6-06a703ea7790-goldmane-ca-bundle\") pod \"goldmane-666569f655-9p9f8\" (UID: \"18fcd4d3-26de-4ac6-99a6-06a703ea7790\") " pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:04.243789 kubelet[2998]: I0121 06:20:04.243463 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/18fcd4d3-26de-4ac6-99a6-06a703ea7790-goldmane-key-pair\") pod \"goldmane-666569f655-9p9f8\" (UID: 
\"18fcd4d3-26de-4ac6-99a6-06a703ea7790\") " pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:04.243789 kubelet[2998]: I0121 06:20:04.243487 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvz4\" (UniqueName: \"kubernetes.io/projected/d06b2fe8-bce2-4b8f-842a-8da146f1a644-kube-api-access-5gvz4\") pod \"calico-apiserver-76f4489f98-89ljm\" (UID: \"d06b2fe8-bce2-4b8f-842a-8da146f1a644\") " pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" Jan 21 06:20:04.243789 kubelet[2998]: I0121 06:20:04.243506 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0928ac10-29ff-4619-8155-c160108ee532-calico-apiserver-certs\") pod \"calico-apiserver-76f4489f98-lvqcb\" (UID: \"0928ac10-29ff-4619-8155-c160108ee532\") " pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" Jan 21 06:20:04.243789 kubelet[2998]: I0121 06:20:04.243549 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br69\" (UniqueName: \"kubernetes.io/projected/18fcd4d3-26de-4ac6-99a6-06a703ea7790-kube-api-access-4br69\") pod \"goldmane-666569f655-9p9f8\" (UID: \"18fcd4d3-26de-4ac6-99a6-06a703ea7790\") " pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:04.243919 kubelet[2998]: I0121 06:20:04.243569 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d06b2fe8-bce2-4b8f-842a-8da146f1a644-calico-apiserver-certs\") pod \"calico-apiserver-76f4489f98-89ljm\" (UID: \"d06b2fe8-bce2-4b8f-842a-8da146f1a644\") " pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" Jan 21 06:20:04.373042 kubelet[2998]: E0121 06:20:04.369197 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:04.386333 containerd[1588]: time="2026-01-21T06:20:04.385613611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z55s4,Uid:00cbc947-52ff-416d-bc74-328c0c5546b4,Namespace:kube-system,Attempt:0,}" Jan 21 06:20:04.400031 kubelet[2998]: E0121 06:20:04.397816 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:04.400288 containerd[1588]: time="2026-01-21T06:20:04.398922074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqf7b,Uid:d179681c-11cd-468d-ad87-dad9a234715d,Namespace:kube-system,Attempt:0,}" Jan 21 06:20:04.436245 containerd[1588]: time="2026-01-21T06:20:04.436192274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794c58f5c4-nr82l,Uid:6da7defa-596e-458c-83a7-a38c6a1d3cd4,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:04.438016 containerd[1588]: time="2026-01-21T06:20:04.436248771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d998774-t5xkn,Uid:44e1484f-18ef-43d7-8551-7c92cf1926c4,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:04.472302 containerd[1588]: time="2026-01-21T06:20:04.472253916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9p9f8,Uid:18fcd4d3-26de-4ac6-99a6-06a703ea7790,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:04.485251 containerd[1588]: time="2026-01-21T06:20:04.485204976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-lvqcb,Uid:0928ac10-29ff-4619-8155-c160108ee532,Namespace:calico-apiserver,Attempt:0,}" Jan 21 06:20:04.520848 containerd[1588]: time="2026-01-21T06:20:04.520808812Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76f4489f98-89ljm,Uid:d06b2fe8-bce2-4b8f-842a-8da146f1a644,Namespace:calico-apiserver,Attempt:0,}" Jan 21 06:20:04.527203 systemd[1]: Created slice kubepods-besteffort-pod219deac5_c979_42b1_a796_a0c185470d95.slice - libcontainer container kubepods-besteffort-pod219deac5_c979_42b1_a796_a0c185470d95.slice. Jan 21 06:20:04.549888 containerd[1588]: time="2026-01-21T06:20:04.549847173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4vl7,Uid:219deac5-c979-42b1-a796-a0c185470d95,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:05.034189 kubelet[2998]: E0121 06:20:05.033843 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:05.066836 containerd[1588]: time="2026-01-21T06:20:05.053890111Z" level=error msg="Failed to destroy network for sandbox \"07725514d82fa35641360953da2e32a8d749fe549e746501860b1dbccebc2265\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.062545 systemd[1]: run-netns-cni\x2dd6d880c9\x2d8c30\x2da5a3\x2d1a66\x2d66d4205f871a.mount: Deactivated successfully. 
Jan 21 06:20:05.071236 containerd[1588]: time="2026-01-21T06:20:05.070159063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 21 06:20:05.111344 containerd[1588]: time="2026-01-21T06:20:05.111150216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z55s4,Uid:00cbc947-52ff-416d-bc74-328c0c5546b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07725514d82fa35641360953da2e32a8d749fe549e746501860b1dbccebc2265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.112456 kubelet[2998]: E0121 06:20:05.111608 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07725514d82fa35641360953da2e32a8d749fe549e746501860b1dbccebc2265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.112456 kubelet[2998]: E0121 06:20:05.111995 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07725514d82fa35641360953da2e32a8d749fe549e746501860b1dbccebc2265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z55s4" Jan 21 06:20:05.112456 kubelet[2998]: E0121 06:20:05.112025 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07725514d82fa35641360953da2e32a8d749fe549e746501860b1dbccebc2265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z55s4" Jan 21 06:20:05.112810 kubelet[2998]: E0121 06:20:05.112172 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z55s4_kube-system(00cbc947-52ff-416d-bc74-328c0c5546b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z55s4_kube-system(00cbc947-52ff-416d-bc74-328c0c5546b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07725514d82fa35641360953da2e32a8d749fe549e746501860b1dbccebc2265\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z55s4" podUID="00cbc947-52ff-416d-bc74-328c0c5546b4" Jan 21 06:20:05.185815 containerd[1588]: time="2026-01-21T06:20:05.185371580Z" level=error msg="Failed to destroy network for sandbox \"daf12a1ed4fcd08600cd3d66ab955a3ef8d4075c5d66e8ac678821e6938c5e84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.190577 systemd[1]: run-netns-cni\x2daddf43c0\x2da591\x2d11f2\x2d8382\x2daf70ac0d1381.mount: Deactivated successfully. 
Jan 21 06:20:05.210161 containerd[1588]: time="2026-01-21T06:20:05.209508277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d998774-t5xkn,Uid:44e1484f-18ef-43d7-8551-7c92cf1926c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf12a1ed4fcd08600cd3d66ab955a3ef8d4075c5d66e8ac678821e6938c5e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.210778 kubelet[2998]: E0121 06:20:05.210265 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf12a1ed4fcd08600cd3d66ab955a3ef8d4075c5d66e8ac678821e6938c5e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.210778 kubelet[2998]: E0121 06:20:05.210325 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf12a1ed4fcd08600cd3d66ab955a3ef8d4075c5d66e8ac678821e6938c5e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" Jan 21 06:20:05.210778 kubelet[2998]: E0121 06:20:05.210354 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf12a1ed4fcd08600cd3d66ab955a3ef8d4075c5d66e8ac678821e6938c5e84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-797d998774-t5xkn" Jan 21 06:20:05.210926 kubelet[2998]: E0121 06:20:05.210426 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daf12a1ed4fcd08600cd3d66ab955a3ef8d4075c5d66e8ac678821e6938c5e84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:05.260030 containerd[1588]: time="2026-01-21T06:20:05.259956862Z" level=error msg="Failed to destroy network for sandbox \"fee275fc8f357f4cb0881f6904c9185f55474bd81307d06840197359fa847ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.264902 systemd[1]: run-netns-cni\x2ddf7cf238\x2d8e18\x2d32c1\x2d9351\x2de12c33fda41c.mount: Deactivated successfully. 
Jan 21 06:20:05.279908 containerd[1588]: time="2026-01-21T06:20:05.279536088Z" level=error msg="Failed to destroy network for sandbox \"4b05656721344f347175dad938a5445958936249485742b1dbc3877f5a788f37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.281350 containerd[1588]: time="2026-01-21T06:20:05.281018320Z" level=error msg="Failed to destroy network for sandbox \"242303f297ef09d1dc9dc5b14101006cdf41641680762f596963e1dfcba387b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.291865 containerd[1588]: time="2026-01-21T06:20:05.287769884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqf7b,Uid:d179681c-11cd-468d-ad87-dad9a234715d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee275fc8f357f4cb0881f6904c9185f55474bd81307d06840197359fa847ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.289564 systemd[1]: run-netns-cni\x2d8b91c693\x2d7254\x2d03ad\x2dbc48\x2d4d5e6962f798.mount: Deactivated successfully. Jan 21 06:20:05.289866 systemd[1]: run-netns-cni\x2d8203e29d\x2db02a\x2df481\x2d8671\x2d67e6b6dc473d.mount: Deactivated successfully. 
Jan 21 06:20:05.294397 kubelet[2998]: E0121 06:20:05.293806 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee275fc8f357f4cb0881f6904c9185f55474bd81307d06840197359fa847ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.294397 kubelet[2998]: E0121 06:20:05.293875 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee275fc8f357f4cb0881f6904c9185f55474bd81307d06840197359fa847ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rqf7b" Jan 21 06:20:05.294397 kubelet[2998]: E0121 06:20:05.293920 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fee275fc8f357f4cb0881f6904c9185f55474bd81307d06840197359fa847ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rqf7b" Jan 21 06:20:05.294561 kubelet[2998]: E0121 06:20:05.293977 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rqf7b_kube-system(d179681c-11cd-468d-ad87-dad9a234715d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rqf7b_kube-system(d179681c-11cd-468d-ad87-dad9a234715d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fee275fc8f357f4cb0881f6904c9185f55474bd81307d06840197359fa847ca6\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rqf7b" podUID="d179681c-11cd-468d-ad87-dad9a234715d" Jan 21 06:20:05.302464 containerd[1588]: time="2026-01-21T06:20:05.302419571Z" level=error msg="Failed to destroy network for sandbox \"970dd1db3147c707a64dc65c2dac65dead8d37f5c1afd33e4123d8dcada5b5a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.308578 containerd[1588]: time="2026-01-21T06:20:05.308539332Z" level=error msg="Failed to destroy network for sandbox \"24c999d0c1d165b292f0673de20c7e6cbaf545a938e89dab80f1d18fb8b65423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.312538 containerd[1588]: time="2026-01-21T06:20:05.312501190Z" level=error msg="Failed to destroy network for sandbox \"a20bb5409ac9863653e9291b58157a48653eefcf5be53f2eb9cc08a91e0255da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.329424 containerd[1588]: time="2026-01-21T06:20:05.329288214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4vl7,Uid:219deac5-c979-42b1-a796-a0c185470d95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b05656721344f347175dad938a5445958936249485742b1dbc3877f5a788f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.330329 
kubelet[2998]: E0121 06:20:05.330252 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b05656721344f347175dad938a5445958936249485742b1dbc3877f5a788f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.330329 kubelet[2998]: E0121 06:20:05.330330 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b05656721344f347175dad938a5445958936249485742b1dbc3877f5a788f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:20:05.330329 kubelet[2998]: E0121 06:20:05.330360 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b05656721344f347175dad938a5445958936249485742b1dbc3877f5a788f37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4vl7" Jan 21 06:20:05.331335 kubelet[2998]: E0121 06:20:05.331023 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b05656721344f347175dad938a5445958936249485742b1dbc3877f5a788f37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:05.337693 containerd[1588]: time="2026-01-21T06:20:05.337530565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-89ljm,Uid:d06b2fe8-bce2-4b8f-842a-8da146f1a644,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"970dd1db3147c707a64dc65c2dac65dead8d37f5c1afd33e4123d8dcada5b5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.340466 kubelet[2998]: E0121 06:20:05.339936 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970dd1db3147c707a64dc65c2dac65dead8d37f5c1afd33e4123d8dcada5b5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.340466 kubelet[2998]: E0121 06:20:05.340000 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"970dd1db3147c707a64dc65c2dac65dead8d37f5c1afd33e4123d8dcada5b5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" Jan 21 06:20:05.340466 kubelet[2998]: E0121 06:20:05.340026 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"970dd1db3147c707a64dc65c2dac65dead8d37f5c1afd33e4123d8dcada5b5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" Jan 21 06:20:05.342010 kubelet[2998]: E0121 06:20:05.340189 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"970dd1db3147c707a64dc65c2dac65dead8d37f5c1afd33e4123d8dcada5b5a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:20:05.342285 containerd[1588]: time="2026-01-21T06:20:05.341793283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794c58f5c4-nr82l,Uid:6da7defa-596e-458c-83a7-a38c6a1d3cd4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"242303f297ef09d1dc9dc5b14101006cdf41641680762f596963e1dfcba387b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.343172 kubelet[2998]: E0121 06:20:05.343039 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242303f297ef09d1dc9dc5b14101006cdf41641680762f596963e1dfcba387b1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.343858 kubelet[2998]: E0121 06:20:05.343787 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242303f297ef09d1dc9dc5b14101006cdf41641680762f596963e1dfcba387b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-794c58f5c4-nr82l" Jan 21 06:20:05.343858 kubelet[2998]: E0121 06:20:05.343819 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242303f297ef09d1dc9dc5b14101006cdf41641680762f596963e1dfcba387b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-794c58f5c4-nr82l" Jan 21 06:20:05.344324 kubelet[2998]: E0121 06:20:05.344282 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-794c58f5c4-nr82l_calico-system(6da7defa-596e-458c-83a7-a38c6a1d3cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-794c58f5c4-nr82l_calico-system(6da7defa-596e-458c-83a7-a38c6a1d3cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"242303f297ef09d1dc9dc5b14101006cdf41641680762f596963e1dfcba387b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-794c58f5c4-nr82l" podUID="6da7defa-596e-458c-83a7-a38c6a1d3cd4" Jan 21 06:20:05.353287 
containerd[1588]: time="2026-01-21T06:20:05.352994896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9p9f8,Uid:18fcd4d3-26de-4ac6-99a6-06a703ea7790,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20bb5409ac9863653e9291b58157a48653eefcf5be53f2eb9cc08a91e0255da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.355994 kubelet[2998]: E0121 06:20:05.355949 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20bb5409ac9863653e9291b58157a48653eefcf5be53f2eb9cc08a91e0255da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.357305 kubelet[2998]: E0121 06:20:05.357012 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20bb5409ac9863653e9291b58157a48653eefcf5be53f2eb9cc08a91e0255da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:05.357305 kubelet[2998]: E0121 06:20:05.357155 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20bb5409ac9863653e9291b58157a48653eefcf5be53f2eb9cc08a91e0255da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 
06:20:05.357305 kubelet[2998]: E0121 06:20:05.357235 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a20bb5409ac9863653e9291b58157a48653eefcf5be53f2eb9cc08a91e0255da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:05.358496 containerd[1588]: time="2026-01-21T06:20:05.358319383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-lvqcb,Uid:0928ac10-29ff-4619-8155-c160108ee532,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c999d0c1d165b292f0673de20c7e6cbaf545a938e89dab80f1d18fb8b65423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.359286 kubelet[2998]: E0121 06:20:05.358915 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c999d0c1d165b292f0673de20c7e6cbaf545a938e89dab80f1d18fb8b65423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:05.359286 kubelet[2998]: E0121 06:20:05.358961 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"24c999d0c1d165b292f0673de20c7e6cbaf545a938e89dab80f1d18fb8b65423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" Jan 21 06:20:05.359286 kubelet[2998]: E0121 06:20:05.358984 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24c999d0c1d165b292f0673de20c7e6cbaf545a938e89dab80f1d18fb8b65423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" Jan 21 06:20:05.359417 kubelet[2998]: E0121 06:20:05.359037 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24c999d0c1d165b292f0673de20c7e6cbaf545a938e89dab80f1d18fb8b65423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:05.780815 systemd[1]: run-netns-cni\x2da2e36064\x2d36d4\x2d7668\x2d0309\x2d6354821315c4.mount: Deactivated successfully. Jan 21 06:20:05.781018 systemd[1]: run-netns-cni\x2d27dc6ff7\x2d32a9\x2db7b0\x2d5d73\x2d767ec884e97e.mount: Deactivated successfully. 
Jan 21 06:20:05.782474 systemd[1]: run-netns-cni\x2d3358b41f\x2dde3e\x2d329d\x2d9e76\x2d5e95ce6bb801.mount: Deactivated successfully. Jan 21 06:20:15.508514 containerd[1588]: time="2026-01-21T06:20:15.508175940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d998774-t5xkn,Uid:44e1484f-18ef-43d7-8551-7c92cf1926c4,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:15.759344 containerd[1588]: time="2026-01-21T06:20:15.758954791Z" level=error msg="Failed to destroy network for sandbox \"759ad384adea1a1a81017ab947beb692773eb024e7e9dcc54f5647905a64b4aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:15.762572 systemd[1]: run-netns-cni\x2d4afe2af9\x2d7849\x2d1d3d\x2d0861\x2d31652af15a3e.mount: Deactivated successfully. Jan 21 06:20:15.772316 containerd[1588]: time="2026-01-21T06:20:15.772203046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d998774-t5xkn,Uid:44e1484f-18ef-43d7-8551-7c92cf1926c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ad384adea1a1a81017ab947beb692773eb024e7e9dcc54f5647905a64b4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:15.773206 kubelet[2998]: E0121 06:20:15.772967 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ad384adea1a1a81017ab947beb692773eb024e7e9dcc54f5647905a64b4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:15.773206 kubelet[2998]: E0121 
06:20:15.773042 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ad384adea1a1a81017ab947beb692773eb024e7e9dcc54f5647905a64b4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" Jan 21 06:20:15.773206 kubelet[2998]: E0121 06:20:15.773151 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ad384adea1a1a81017ab947beb692773eb024e7e9dcc54f5647905a64b4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" Jan 21 06:20:15.774482 kubelet[2998]: E0121 06:20:15.774383 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"759ad384adea1a1a81017ab947beb692773eb024e7e9dcc54f5647905a64b4aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:16.352471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258975586.mount: Deactivated successfully. 
Jan 21 06:20:16.482382 containerd[1588]: time="2026-01-21T06:20:16.480378588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 21 06:20:16.490546 containerd[1588]: time="2026-01-21T06:20:16.490498574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.420292301s" Jan 21 06:20:16.491209 containerd[1588]: time="2026-01-21T06:20:16.490906391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 21 06:20:16.497595 containerd[1588]: time="2026-01-21T06:20:16.497307460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:20:16.503212 containerd[1588]: time="2026-01-21T06:20:16.503179283Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:20:16.504888 containerd[1588]: time="2026-01-21T06:20:16.504858204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 06:20:16.517993 containerd[1588]: time="2026-01-21T06:20:16.517948226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-lvqcb,Uid:0928ac10-29ff-4619-8155-c160108ee532,Namespace:calico-apiserver,Attempt:0,}" Jan 21 06:20:16.530906 containerd[1588]: time="2026-01-21T06:20:16.530602405Z" level=info msg="CreateContainer 
within sandbox \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 21 06:20:16.582872 containerd[1588]: time="2026-01-21T06:20:16.581799433Z" level=info msg="Container 1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:20:16.663938 containerd[1588]: time="2026-01-21T06:20:16.662223353Z" level=info msg="CreateContainer within sandbox \"e803f6650cc4963d65d31ac74f31a5f275e4fa14a07a943d0b3eae44cc504242\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1\"" Jan 21 06:20:16.665360 containerd[1588]: time="2026-01-21T06:20:16.665224269Z" level=info msg="StartContainer for \"1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1\"" Jan 21 06:20:16.668223 containerd[1588]: time="2026-01-21T06:20:16.668182417Z" level=info msg="connecting to shim 1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1" address="unix:///run/containerd/s/25a7b4f22c4b09614481c147da64642e367b577ea3c3a369c1aaadb752a763ca" protocol=ttrpc version=3 Jan 21 06:20:16.795151 containerd[1588]: time="2026-01-21T06:20:16.795007922Z" level=error msg="Failed to destroy network for sandbox \"f5253a0c5f2c4d382136e1cd918c5b1ba2a975b46d864645d12aa2723eae0d61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:16.803408 systemd[1]: run-netns-cni\x2de1156a02\x2d29cc\x2df8a6\x2d6ed1\x2d99311b6c8381.mount: Deactivated successfully. 
Jan 21 06:20:16.822003 containerd[1588]: time="2026-01-21T06:20:16.821524465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-lvqcb,Uid:0928ac10-29ff-4619-8155-c160108ee532,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5253a0c5f2c4d382136e1cd918c5b1ba2a975b46d864645d12aa2723eae0d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:16.823863 kubelet[2998]: E0121 06:20:16.823204 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5253a0c5f2c4d382136e1cd918c5b1ba2a975b46d864645d12aa2723eae0d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:16.823863 kubelet[2998]: E0121 06:20:16.823266 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5253a0c5f2c4d382136e1cd918c5b1ba2a975b46d864645d12aa2723eae0d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" Jan 21 06:20:16.823863 kubelet[2998]: E0121 06:20:16.823294 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5253a0c5f2c4d382136e1cd918c5b1ba2a975b46d864645d12aa2723eae0d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" Jan 21 06:20:16.824472 kubelet[2998]: E0121 06:20:16.823355 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5253a0c5f2c4d382136e1cd918c5b1ba2a975b46d864645d12aa2723eae0d61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:16.844427 systemd[1]: Started cri-containerd-1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1.scope - libcontainer container 1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1. 
Jan 21 06:20:17.025000 audit: BPF prog-id=172 op=LOAD Jan 21 06:20:17.031911 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 21 06:20:17.032029 kernel: audit: type=1334 audit(1768976417.025:596): prog-id=172 op=LOAD Jan 21 06:20:17.025000 audit[4131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.063972 kernel: audit: type=1300 audit(1768976417.025:596): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.090038 kernel: audit: type=1327 audit(1768976417.025:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.026000 audit: BPF prog-id=173 op=LOAD Jan 21 06:20:17.098818 kernel: audit: type=1334 audit(1768976417.026:597): prog-id=173 op=LOAD Jan 21 06:20:17.098907 kernel: audit: type=1300 audit(1768976417.026:597): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.026000 audit[4131]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.156461 kernel: audit: type=1327 audit(1768976417.026:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.026000 audit: BPF prog-id=173 op=UNLOAD Jan 21 06:20:17.164271 kernel: audit: type=1334 audit(1768976417.026:598): prog-id=173 op=UNLOAD Jan 21 06:20:17.026000 audit[4131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.234510 kernel: audit: type=1300 audit(1768976417.026:598): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.234787 kernel: audit: type=1327 audit(1768976417.026:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.026000 audit: BPF prog-id=172 op=UNLOAD Jan 21 06:20:17.243439 containerd[1588]: time="2026-01-21T06:20:17.242342157Z" level=info msg="StartContainer for \"1be0a6aab07eb860b9eb9b5a287875777461456cb8f49ac95ebd2e8e18bd05f1\" returns successfully" Jan 21 06:20:17.026000 audit[4131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.243800 kernel: audit: type=1334 audit(1768976417.026:599): prog-id=172 op=UNLOAD Jan 21 06:20:17.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.026000 audit: BPF prog-id=174 op=LOAD Jan 21 06:20:17.026000 audit[4131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3536 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:17.026000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653061366161623037656238363062396562396235613238373837 Jan 21 06:20:17.479234 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 21 06:20:17.479413 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 21 06:20:17.504493 containerd[1588]: time="2026-01-21T06:20:17.504316665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-89ljm,Uid:d06b2fe8-bce2-4b8f-842a-8da146f1a644,Namespace:calico-apiserver,Attempt:0,}" Jan 21 06:20:17.504493 containerd[1588]: time="2026-01-21T06:20:17.504430966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9p9f8,Uid:18fcd4d3-26de-4ac6-99a6-06a703ea7790,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:17.957331 kubelet[2998]: I0121 06:20:17.956771 2998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-backend-key-pair\") pod \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\" (UID: \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\") " Jan 21 06:20:17.963735 kubelet[2998]: I0121 06:20:17.963369 2998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzmhg\" (UniqueName: \"kubernetes.io/projected/6da7defa-596e-458c-83a7-a38c6a1d3cd4-kube-api-access-lzmhg\") pod \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\" (UID: \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\") " Jan 21 06:20:17.963735 kubelet[2998]: I0121 06:20:17.963476 2998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-ca-bundle\") pod 
\"6da7defa-596e-458c-83a7-a38c6a1d3cd4\" (UID: \"6da7defa-596e-458c-83a7-a38c6a1d3cd4\") " Jan 21 06:20:17.966493 kubelet[2998]: I0121 06:20:17.966397 2998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6da7defa-596e-458c-83a7-a38c6a1d3cd4" (UID: "6da7defa-596e-458c-83a7-a38c6a1d3cd4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 06:20:17.986011 kubelet[2998]: I0121 06:20:17.983248 2998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6da7defa-596e-458c-83a7-a38c6a1d3cd4" (UID: "6da7defa-596e-458c-83a7-a38c6a1d3cd4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 06:20:17.987498 systemd[1]: var-lib-kubelet-pods-6da7defa\x2d596e\x2d458c\x2d83a7\x2da38c6a1d3cd4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 21 06:20:17.998425 systemd[1]: var-lib-kubelet-pods-6da7defa\x2d596e\x2d458c\x2d83a7\x2da38c6a1d3cd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzmhg.mount: Deactivated successfully. Jan 21 06:20:18.002591 kubelet[2998]: I0121 06:20:18.002391 2998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da7defa-596e-458c-83a7-a38c6a1d3cd4-kube-api-access-lzmhg" (OuterVolumeSpecName: "kube-api-access-lzmhg") pod "6da7defa-596e-458c-83a7-a38c6a1d3cd4" (UID: "6da7defa-596e-458c-83a7-a38c6a1d3cd4"). InnerVolumeSpecName "kube-api-access-lzmhg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 06:20:18.064950 kubelet[2998]: I0121 06:20:18.064435 2998 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 21 06:20:18.064950 kubelet[2998]: I0121 06:20:18.064865 2998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lzmhg\" (UniqueName: \"kubernetes.io/projected/6da7defa-596e-458c-83a7-a38c6a1d3cd4-kube-api-access-lzmhg\") on node \"localhost\" DevicePath \"\"" Jan 21 06:20:18.064950 kubelet[2998]: I0121 06:20:18.064882 2998 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da7defa-596e-458c-83a7-a38c6a1d3cd4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 21 06:20:18.231796 kubelet[2998]: E0121 06:20:18.230915 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:18.255540 systemd[1]: Removed slice kubepods-besteffort-pod6da7defa_596e_458c_83a7_a38c6a1d3cd4.slice - libcontainer container kubepods-besteffort-pod6da7defa_596e_458c_83a7_a38c6a1d3cd4.slice. Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:17.981 [INFO][4210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:17.981 [INFO][4210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" iface="eth0" netns="/var/run/netns/cni-6edf441f-8dcf-2d85-3af9-afd3d20aab62" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:17.981 [INFO][4210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" iface="eth0" netns="/var/run/netns/cni-6edf441f-8dcf-2d85-3af9-afd3d20aab62" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:17.989 [INFO][4210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" iface="eth0" netns="/var/run/netns/cni-6edf441f-8dcf-2d85-3af9-afd3d20aab62" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:17.989 [INFO][4210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:17.989 [INFO][4210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:18.216 [INFO][4255] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" HandleID="k8s-pod-network.248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Workload="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:18.217 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:18.294832 containerd[1588]: 2026-01-21 06:20:18.218 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 06:20:18.296756 containerd[1588]: 2026-01-21 06:20:18.234 [WARNING][4255] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" HandleID="k8s-pod-network.248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Workload="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:18.296756 containerd[1588]: 2026-01-21 06:20:18.234 [INFO][4255] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" HandleID="k8s-pod-network.248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Workload="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:18.296756 containerd[1588]: 2026-01-21 06:20:18.250 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:18.296756 containerd[1588]: 2026-01-21 06:20:18.281 [INFO][4210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95" Jan 21 06:20:18.295180 systemd[1]: run-netns-cni\x2d6edf441f\x2d8dcf\x2d2d85\x2d3af9\x2dafd3d20aab62.mount: Deactivated successfully. 
Jan 21 06:20:18.334478 containerd[1588]: time="2026-01-21T06:20:18.333270147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9p9f8,Uid:18fcd4d3-26de-4ac6-99a6-06a703ea7790,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:17.940 [INFO][4228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:17.941 [INFO][4228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" iface="eth0" netns="/var/run/netns/cni-374fb593-f507-01fc-7b35-317de56b0506" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:17.942 [INFO][4228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" iface="eth0" netns="/var/run/netns/cni-374fb593-f507-01fc-7b35-317de56b0506" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:17.944 [INFO][4228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" iface="eth0" netns="/var/run/netns/cni-374fb593-f507-01fc-7b35-317de56b0506" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:17.944 [INFO][4228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:17.944 [INFO][4228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:18.216 [INFO][4247] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" HandleID="k8s-pod-network.53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Workload="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:18.217 [INFO][4247] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:18.369868 containerd[1588]: 2026-01-21 06:20:18.256 [INFO][4247] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 06:20:18.370470 containerd[1588]: 2026-01-21 06:20:18.327 [WARNING][4247] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" HandleID="k8s-pod-network.53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Workload="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:18.370470 containerd[1588]: 2026-01-21 06:20:18.328 [INFO][4247] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" HandleID="k8s-pod-network.53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Workload="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:18.370470 containerd[1588]: 2026-01-21 06:20:18.343 [INFO][4247] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:18.370470 containerd[1588]: 2026-01-21 06:20:18.358 [INFO][4228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557" Jan 21 06:20:18.379977 systemd[1]: run-netns-cni\x2d374fb593\x2df507\x2d01fc\x2d7b35\x2d317de56b0506.mount: Deactivated successfully. 
Jan 21 06:20:18.383457 kubelet[2998]: E0121 06:20:18.381213 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:18.383457 kubelet[2998]: E0121 06:20:18.381268 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:18.383457 kubelet[2998]: E0121 06:20:18.381287 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9p9f8" Jan 21 06:20:18.383591 kubelet[2998]: E0121 06:20:18.381330 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"248d7bcb9fc5f4fc0424e72d378bdcf16fe2729a4dd9a1c77d7e1a2a75b8cd95\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:18.384562 kubelet[2998]: I0121 06:20:18.383944 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bg4vn" podStartSLOduration=3.45667522 podStartE2EDuration="30.383929928s" podCreationTimestamp="2026-01-21 06:19:48 +0000 UTC" firstStartedPulling="2026-01-21 06:19:49.566087001 +0000 UTC m=+29.286993562" lastFinishedPulling="2026-01-21 06:20:16.493341708 +0000 UTC m=+56.214248270" observedRunningTime="2026-01-21 06:20:18.302498467 +0000 UTC m=+58.023405029" watchObservedRunningTime="2026-01-21 06:20:18.383929928 +0000 UTC m=+58.104836489" Jan 21 06:20:18.392277 containerd[1588]: time="2026-01-21T06:20:18.392010215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-89ljm,Uid:d06b2fe8-bce2-4b8f-842a-8da146f1a644,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:18.393209 kubelet[2998]: E0121 06:20:18.392327 2998 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 06:20:18.393209 kubelet[2998]: E0121 06:20:18.392367 2998 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" Jan 21 06:20:18.393209 kubelet[2998]: E0121 06:20:18.392386 2998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" Jan 21 06:20:18.393375 kubelet[2998]: E0121 06:20:18.392429 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53418a07f027f5572c87bca71baa7a6d1923b80322567f324caa3ff0ecebc557\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:20:18.505472 kubelet[2998]: E0121 06:20:18.504861 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:18.506748 
containerd[1588]: time="2026-01-21T06:20:18.506356168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z55s4,Uid:00cbc947-52ff-416d-bc74-328c0c5546b4,Namespace:kube-system,Attempt:0,}" Jan 21 06:20:18.512297 containerd[1588]: time="2026-01-21T06:20:18.512262467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4vl7,Uid:219deac5-c979-42b1-a796-a0c185470d95,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:18.519543 kubelet[2998]: I0121 06:20:18.519438 2998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da7defa-596e-458c-83a7-a38c6a1d3cd4" path="/var/lib/kubelet/pods/6da7defa-596e-458c-83a7-a38c6a1d3cd4/volumes" Jan 21 06:20:18.630393 systemd[1]: Created slice kubepods-besteffort-poddfd24090_6b99_4c4c_8800_9882cbbf99e5.slice - libcontainer container kubepods-besteffort-poddfd24090_6b99_4c4c_8800_9882cbbf99e5.slice. Jan 21 06:20:18.678217 kubelet[2998]: I0121 06:20:18.678038 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntxt\" (UniqueName: \"kubernetes.io/projected/dfd24090-6b99-4c4c-8800-9882cbbf99e5-kube-api-access-nntxt\") pod \"whisker-69d46b84b4-xb8qc\" (UID: \"dfd24090-6b99-4c4c-8800-9882cbbf99e5\") " pod="calico-system/whisker-69d46b84b4-xb8qc" Jan 21 06:20:18.678347 kubelet[2998]: I0121 06:20:18.678237 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfd24090-6b99-4c4c-8800-9882cbbf99e5-whisker-ca-bundle\") pod \"whisker-69d46b84b4-xb8qc\" (UID: \"dfd24090-6b99-4c4c-8800-9882cbbf99e5\") " pod="calico-system/whisker-69d46b84b4-xb8qc" Jan 21 06:20:18.678347 kubelet[2998]: I0121 06:20:18.678285 2998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/dfd24090-6b99-4c4c-8800-9882cbbf99e5-whisker-backend-key-pair\") pod \"whisker-69d46b84b4-xb8qc\" (UID: \"dfd24090-6b99-4c4c-8800-9882cbbf99e5\") " pod="calico-system/whisker-69d46b84b4-xb8qc" Jan 21 06:20:18.948586 containerd[1588]: time="2026-01-21T06:20:18.947596337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69d46b84b4-xb8qc,Uid:dfd24090-6b99-4c4c-8800-9882cbbf99e5,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:19.091295 systemd-networkd[1500]: cali0479b899e55: Link UP Jan 21 06:20:19.100152 systemd-networkd[1500]: cali0479b899e55: Gained carrier Jan 21 06:20:19.200271 containerd[1588]: 2026-01-21 06:20:18.613 [INFO][4281] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 06:20:19.200271 containerd[1588]: 2026-01-21 06:20:18.663 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--z55s4-eth0 coredns-674b8bbfcf- kube-system 00cbc947-52ff-416d-bc74-328c0c5546b4 923 0 2026-01-21 06:19:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-z55s4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0479b899e55 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-" Jan 21 06:20:19.200271 containerd[1588]: 2026-01-21 06:20:18.664 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 
06:20:19.200271 containerd[1588]: 2026-01-21 06:20:18.774 [INFO][4324] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" HandleID="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Workload="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.775 [INFO][4324] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" HandleID="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Workload="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125b00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-z55s4", "timestamp":"2026-01-21 06:20:18.774877321 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.775 [INFO][4324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.775 [INFO][4324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.775 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.810 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" host="localhost" Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.862 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.887 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.897 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.928 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:19.201281 containerd[1588]: 2026-01-21 06:20:18.930 [INFO][4324] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" host="localhost" Jan 21 06:20:19.201850 containerd[1588]: 2026-01-21 06:20:18.947 [INFO][4324] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec Jan 21 06:20:19.201850 containerd[1588]: 2026-01-21 06:20:18.966 [INFO][4324] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" host="localhost" Jan 21 06:20:19.201850 containerd[1588]: 2026-01-21 06:20:19.020 [INFO][4324] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" host="localhost" Jan 21 06:20:19.201850 containerd[1588]: 2026-01-21 06:20:19.020 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" host="localhost" Jan 21 06:20:19.201850 containerd[1588]: 2026-01-21 06:20:19.020 [INFO][4324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:19.201850 containerd[1588]: 2026-01-21 06:20:19.020 [INFO][4324] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" HandleID="k8s-pod-network.fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Workload="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 06:20:19.201964 containerd[1588]: 2026-01-21 06:20:19.037 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--z55s4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"00cbc947-52ff-416d-bc74-328c0c5546b4", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-z55s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0479b899e55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:19.202172 containerd[1588]: 2026-01-21 06:20:19.038 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 06:20:19.202172 containerd[1588]: 2026-01-21 06:20:19.038 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0479b899e55 ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 06:20:19.202172 containerd[1588]: 2026-01-21 06:20:19.112 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 06:20:19.202257 containerd[1588]: 2026-01-21 06:20:19.121 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--z55s4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"00cbc947-52ff-416d-bc74-328c0c5546b4", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec", Pod:"coredns-674b8bbfcf-z55s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0479b899e55", MAC:"4e:a8:84:1c:bf:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:19.202257 containerd[1588]: 2026-01-21 06:20:19.195 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-z55s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z55s4-eth0" Jan 21 06:20:19.241195 containerd[1588]: time="2026-01-21T06:20:19.240919489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-89ljm,Uid:d06b2fe8-bce2-4b8f-842a-8da146f1a644,Namespace:calico-apiserver,Attempt:0,}" Jan 21 06:20:19.243414 kubelet[2998]: E0121 06:20:19.243368 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:19.247566 containerd[1588]: time="2026-01-21T06:20:19.247428000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9p9f8,Uid:18fcd4d3-26de-4ac6-99a6-06a703ea7790,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:19.327044 systemd-networkd[1500]: cali36a9007700f: Link UP Jan 21 06:20:19.341252 systemd-networkd[1500]: cali36a9007700f: Gained carrier Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:18.687 [INFO][4300] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:18.750 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w4vl7-eth0 csi-node-driver- calico-system 219deac5-c979-42b1-a796-a0c185470d95 792 0 2026-01-21 06:19:49 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w4vl7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali36a9007700f [] [] }} ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:18.750 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:18.959 [INFO][4337] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" HandleID="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Workload="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:18.959 [INFO][4337] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" HandleID="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Workload="localhost-k8s-csi--node--driver--w4vl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w4vl7", "timestamp":"2026-01-21 06:20:18.95914562 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:18.961 [INFO][4337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.022 [INFO][4337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.022 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.092 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.124 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.158 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.178 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.195 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.196 [INFO][4337] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.208 [INFO][4337] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 
06:20:19.227 [INFO][4337] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.273 [INFO][4337] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.275 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" host="localhost" Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.275 [INFO][4337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:19.447132 containerd[1588]: 2026-01-21 06:20:19.276 [INFO][4337] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" HandleID="k8s-pod-network.b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Workload="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.448914 containerd[1588]: 2026-01-21 06:20:19.312 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4vl7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"219deac5-c979-42b1-a796-a0c185470d95", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 49, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w4vl7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36a9007700f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:19.448914 containerd[1588]: 2026-01-21 06:20:19.312 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.448914 containerd[1588]: 2026-01-21 06:20:19.313 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36a9007700f ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.448914 containerd[1588]: 2026-01-21 06:20:19.345 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" 
Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.448914 containerd[1588]: 2026-01-21 06:20:19.376 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4vl7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"219deac5-c979-42b1-a796-a0c185470d95", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e", Pod:"csi-node-driver-w4vl7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36a9007700f", MAC:"e2:0e:d3:e9:de:9c", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:19.448914 containerd[1588]: 2026-01-21 06:20:19.415 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" Namespace="calico-system" Pod="csi-node-driver-w4vl7" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4vl7-eth0" Jan 21 06:20:19.521760 containerd[1588]: time="2026-01-21T06:20:19.520510133Z" level=info msg="connecting to shim fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec" address="unix:///run/containerd/s/38d0a868a623692efc87e1695d316e2e3898bf3b980bf78ddbc48b0fdb530bb3" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:19.580173 systemd-networkd[1500]: cali3835073e662: Link UP Jan 21 06:20:19.580518 systemd-networkd[1500]: cali3835073e662: Gained carrier Jan 21 06:20:19.585218 containerd[1588]: time="2026-01-21T06:20:19.585029367Z" level=info msg="connecting to shim b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e" address="unix:///run/containerd/s/e2eabf31325caf44121eb646c0d24bf5f2e9302ef0859243c40cc969937c9fea" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.138 [INFO][4350] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.181 [INFO][4350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69d46b84b4--xb8qc-eth0 whisker-69d46b84b4- calico-system dfd24090-6b99-4c4c-8800-9882cbbf99e5 1020 0 2026-01-21 06:20:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69d46b84b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69d46b84b4-xb8qc eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali3835073e662 [] [] }} ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.185 [INFO][4350] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.333 [INFO][4371] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" HandleID="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Workload="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.333 [INFO][4371] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" HandleID="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Workload="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69d46b84b4-xb8qc", "timestamp":"2026-01-21 06:20:19.333189063 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.333 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.333 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.333 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.372 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.400 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.442 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.451 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.460 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.461 [INFO][4371] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.472 [INFO][4371] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.485 [INFO][4371] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.509 [INFO][4371] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.515 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" host="localhost" Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.518 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:19.646383 containerd[1588]: 2026-01-21 06:20:19.518 [INFO][4371] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" HandleID="k8s-pod-network.b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Workload="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.647271 containerd[1588]: 2026-01-21 06:20:19.560 [INFO][4350] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69d46b84b4--xb8qc-eth0", GenerateName:"whisker-69d46b84b4-", Namespace:"calico-system", SelfLink:"", UID:"dfd24090-6b99-4c4c-8800-9882cbbf99e5", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69d46b84b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69d46b84b4-xb8qc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3835073e662", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:19.647271 containerd[1588]: 2026-01-21 06:20:19.560 [INFO][4350] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.647271 containerd[1588]: 2026-01-21 06:20:19.561 [INFO][4350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3835073e662 ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.647271 containerd[1588]: 2026-01-21 06:20:19.583 [INFO][4350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.647271 containerd[1588]: 2026-01-21 06:20:19.589 [INFO][4350] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" 
Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69d46b84b4--xb8qc-eth0", GenerateName:"whisker-69d46b84b4-", Namespace:"calico-system", SelfLink:"", UID:"dfd24090-6b99-4c4c-8800-9882cbbf99e5", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69d46b84b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff", Pod:"whisker-69d46b84b4-xb8qc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3835073e662", MAC:"22:c5:fe:b0:bb:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:19.647271 containerd[1588]: 2026-01-21 06:20:19.634 [INFO][4350] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" Namespace="calico-system" Pod="whisker-69d46b84b4-xb8qc" WorkloadEndpoint="localhost-k8s-whisker--69d46b84b4--xb8qc-eth0" Jan 21 06:20:19.770983 systemd[1]: Started 
cri-containerd-fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec.scope - libcontainer container fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec. Jan 21 06:20:19.856000 audit: BPF prog-id=175 op=LOAD Jan 21 06:20:19.857000 audit: BPF prog-id=176 op=LOAD Jan 21 06:20:19.857000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:19.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.857000 audit: BPF prog-id=176 op=UNLOAD Jan 21 06:20:19.857000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:19.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.861000 audit: BPF prog-id=177 op=LOAD Jan 21 06:20:19.861000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:19.861000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.861000 audit: BPF prog-id=178 op=LOAD Jan 21 06:20:19.861000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:19.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.861000 audit: BPF prog-id=178 op=UNLOAD Jan 21 06:20:19.861000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:19.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.861000 audit: BPF prog-id=177 op=UNLOAD Jan 21 06:20:19.861000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 21 06:20:19.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.861000 audit: BPF prog-id=179 op=LOAD Jan 21 06:20:19.861000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4445 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:19.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661623639366232366461383130613065616531376165366433663465 Jan 21 06:20:19.870847 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:19.885446 containerd[1588]: time="2026-01-21T06:20:19.884987710Z" level=info msg="connecting to shim b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff" address="unix:///run/containerd/s/12b8028a2741be6ddb15afd10fbda966540846ed6da3d5317d52043bea77af23" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:19.902583 systemd[1]: Started cri-containerd-b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e.scope - libcontainer container b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e. 
Jan 21 06:20:20.018000 audit: BPF prog-id=180 op=LOAD Jan 21 06:20:20.020000 audit: BPF prog-id=181 op=LOAD Jan 21 06:20:20.020000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.020000 audit: BPF prog-id=181 op=UNLOAD Jan 21 06:20:20.020000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.040000 audit: BPF prog-id=182 op=LOAD Jan 21 06:20:20.040000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.040000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.040000 audit: BPF prog-id=183 op=LOAD Jan 21 06:20:20.040000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.040000 audit: BPF prog-id=183 op=UNLOAD Jan 21 06:20:20.040000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.040000 audit: BPF prog-id=182 op=UNLOAD Jan 21 06:20:20.040000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:20:20.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.040000 audit: BPF prog-id=184 op=LOAD Jan 21 06:20:20.040000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4480 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230633539316665663934363162613365336432653764643436343366 Jan 21 06:20:20.055526 systemd[1]: Started cri-containerd-b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff.scope - libcontainer container b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff. 
Jan 21 06:20:20.088523 systemd-networkd[1500]: calid48758ab45e: Link UP Jan 21 06:20:20.092533 systemd-networkd[1500]: calid48758ab45e: Gained carrier Jan 21 06:20:20.096950 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:20.135827 containerd[1588]: time="2026-01-21T06:20:20.135519473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z55s4,Uid:00cbc947-52ff-416d-bc74-328c0c5546b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec\"" Jan 21 06:20:20.144827 kubelet[2998]: E0121 06:20:20.144522 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.402 [INFO][4386] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.444 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0 calico-apiserver-76f4489f98- calico-apiserver d06b2fe8-bce2-4b8f-842a-8da146f1a644 990 0 2026-01-21 06:19:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76f4489f98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76f4489f98-89ljm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid48758ab45e [] [] }} ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.444 [INFO][4386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.680 [INFO][4447] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" HandleID="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Workload="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.685 [INFO][4447] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" HandleID="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Workload="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76f4489f98-89ljm", "timestamp":"2026-01-21 06:20:19.680228551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.685 [INFO][4447] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.685 [INFO][4447] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.685 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.754 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.796 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.842 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.855 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.867 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.872 [INFO][4447] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.880 [INFO][4447] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260 Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.898 [INFO][4447] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.931 [INFO][4447] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.933 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" host="localhost" Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.940 [INFO][4447] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:20.172238 containerd[1588]: 2026-01-21 06:20:19.952 [INFO][4447] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" HandleID="k8s-pod-network.1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Workload="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.173547 containerd[1588]: 2026-01-21 06:20:20.008 [INFO][4386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0", GenerateName:"calico-apiserver-76f4489f98-", Namespace:"calico-apiserver", SelfLink:"", UID:"d06b2fe8-bce2-4b8f-842a-8da146f1a644", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f4489f98", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76f4489f98-89ljm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid48758ab45e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:20.173547 containerd[1588]: 2026-01-21 06:20:20.015 [INFO][4386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.173547 containerd[1588]: 2026-01-21 06:20:20.015 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid48758ab45e ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.173547 containerd[1588]: 2026-01-21 06:20:20.109 [INFO][4386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.173547 containerd[1588]: 2026-01-21 06:20:20.116 [INFO][4386] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0", GenerateName:"calico-apiserver-76f4489f98-", Namespace:"calico-apiserver", SelfLink:"", UID:"d06b2fe8-bce2-4b8f-842a-8da146f1a644", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f4489f98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260", Pod:"calico-apiserver-76f4489f98-89ljm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid48758ab45e", MAC:"56:44:45:46:19:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:20.173547 containerd[1588]: 2026-01-21 06:20:20.161 [INFO][4386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-89ljm" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--89ljm-eth0" Jan 21 06:20:20.185913 containerd[1588]: time="2026-01-21T06:20:20.179927202Z" level=info msg="CreateContainer within sandbox \"fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 06:20:20.231757 systemd-networkd[1500]: cali07e71ecc1d0: Link UP Jan 21 06:20:20.235441 systemd-networkd[1500]: cali07e71ecc1d0: Gained carrier Jan 21 06:20:20.299000 audit: BPF prog-id=185 op=LOAD Jan 21 06:20:20.306000 audit: BPF prog-id=186 op=LOAD Jan 21 06:20:20.306000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.306000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.306000 audit: BPF prog-id=186 op=UNLOAD Jan 21 06:20:20.306000 audit[4642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.306000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.322564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069593408.mount: Deactivated successfully. Jan 21 06:20:20.324000 audit: BPF prog-id=187 op=LOAD Jan 21 06:20:20.324000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.324000 audit: BPF prog-id=188 op=LOAD Jan 21 06:20:20.324000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.324000 audit: BPF prog-id=188 op=UNLOAD Jan 21 06:20:20.324000 audit[4642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.324000 audit: BPF prog-id=187 op=UNLOAD Jan 21 06:20:20.324000 audit[4642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.324000 audit: BPF prog-id=189 op=LOAD Jan 21 06:20:20.324000 audit[4642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4629 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237343831623161613838323765656363303864303231653331383162 Jan 21 06:20:20.330916 containerd[1588]: time="2026-01-21T06:20:20.323986725Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w4vl7,Uid:219deac5-c979-42b1-a796-a0c185470d95,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0c591fef9461ba3e3d2e7dd4643f2fafb3b5dafb7d9975682e8182889f14b3e\"" Jan 21 06:20:20.330916 containerd[1588]: time="2026-01-21T06:20:20.325934377Z" level=info msg="Container d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:20:20.334842 containerd[1588]: time="2026-01-21T06:20:20.334811509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 06:20:20.350445 containerd[1588]: time="2026-01-21T06:20:20.350224034Z" level=info msg="CreateContainer within sandbox \"fab696b26da810a0eae17ae6d3f4e6bbc14d8ab445dd8ace00764ebc108b86ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c\"" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.467 [INFO][4380] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.533 [INFO][4380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--9p9f8-eth0 goldmane-666569f655- calico-system 18fcd4d3-26de-4ac6-99a6-06a703ea7790 991 0 2026-01-21 06:19:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-9p9f8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali07e71ecc1d0 [] [] }} ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 
06:20:19.533 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.827 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" HandleID="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Workload="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.832 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" HandleID="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Workload="localhost-k8s-goldmane--666569f655--9p9f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000330450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-9p9f8", "timestamp":"2026-01-21 06:20:19.827581292 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.832 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.939 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.939 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.971 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:19.996 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.029 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.075 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.091 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.091 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.104 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686 Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.147 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.187 [INFO][4481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.188 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" host="localhost" Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.188 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:20.363779 containerd[1588]: 2026-01-21 06:20:20.189 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" HandleID="k8s-pod-network.ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Workload="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.365000 containerd[1588]: 2026-01-21 06:20:20.206 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9p9f8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"18fcd4d3-26de-4ac6-99a6-06a703ea7790", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-9p9f8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07e71ecc1d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:20.365000 containerd[1588]: 2026-01-21 06:20:20.206 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.365000 containerd[1588]: 2026-01-21 06:20:20.206 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07e71ecc1d0 ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.365000 containerd[1588]: 2026-01-21 06:20:20.242 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.365000 containerd[1588]: 2026-01-21 06:20:20.243 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9p9f8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"18fcd4d3-26de-4ac6-99a6-06a703ea7790", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686", Pod:"goldmane-666569f655-9p9f8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07e71ecc1d0", MAC:"9a:06:77:c6:8f:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:20.365000 containerd[1588]: 2026-01-21 06:20:20.301 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" Namespace="calico-system" Pod="goldmane-666569f655-9p9f8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9p9f8-eth0" Jan 21 06:20:20.370738 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, 
ignoring: No such device or address Jan 21 06:20:20.371202 containerd[1588]: time="2026-01-21T06:20:20.371057416Z" level=info msg="StartContainer for \"d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c\"" Jan 21 06:20:20.406290 containerd[1588]: time="2026-01-21T06:20:20.405883256Z" level=info msg="connecting to shim d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c" address="unix:///run/containerd/s/38d0a868a623692efc87e1695d316e2e3898bf3b980bf78ddbc48b0fdb530bb3" protocol=ttrpc version=3 Jan 21 06:20:20.406290 containerd[1588]: time="2026-01-21T06:20:20.405936761Z" level=info msg="connecting to shim 1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260" address="unix:///run/containerd/s/b32b7c56cfded6ea1944af3091846ec599bd098879ea1d2b2301f4db7979a244" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:20.428429 containerd[1588]: time="2026-01-21T06:20:20.426956242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:20.444307 containerd[1588]: time="2026-01-21T06:20:20.444008400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 06:20:20.444307 containerd[1588]: time="2026-01-21T06:20:20.444208352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:20.445025 kubelet[2998]: E0121 06:20:20.444916 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 06:20:20.445430 kubelet[2998]: E0121 06:20:20.445036 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 06:20:20.445482 kubelet[2998]: E0121 06:20:20.445314 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:20.460236 containerd[1588]: time="2026-01-21T06:20:20.459975281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 06:20:20.505051 kubelet[2998]: E0121 06:20:20.505022 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:20.508547 containerd[1588]: time="2026-01-21T06:20:20.508365149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqf7b,Uid:d179681c-11cd-468d-ad87-dad9a234715d,Namespace:kube-system,Attempt:0,}" Jan 21 06:20:20.547585 containerd[1588]: time="2026-01-21T06:20:20.547261564Z" level=info msg="connecting to shim ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686" address="unix:///run/containerd/s/593d217113c4990d9cfe8f9f7c337305bb729df9b9acd7ec3f87358bd2638e42" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:20.589162 systemd[1]: Started cri-containerd-d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c.scope - libcontainer container d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c. 
Jan 21 06:20:20.609601 containerd[1588]: time="2026-01-21T06:20:20.609373542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:20.633424 containerd[1588]: time="2026-01-21T06:20:20.633174962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 06:20:20.633424 containerd[1588]: time="2026-01-21T06:20:20.633353704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:20.633841 kubelet[2998]: E0121 06:20:20.633575 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 06:20:20.633906 kubelet[2998]: E0121 06:20:20.633834 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 06:20:20.634288 kubelet[2998]: E0121 06:20:20.633985 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:20.635448 kubelet[2998]: E0121 06:20:20.635380 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:20.688176 systemd-networkd[1500]: cali36a9007700f: Gained IPv6LL Jan 21 06:20:20.720777 systemd[1]: Started cri-containerd-1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260.scope - libcontainer container 1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260. 
Jan 21 06:20:20.741000 audit: BPF prog-id=190 op=LOAD Jan 21 06:20:20.749000 audit: BPF prog-id=191 op=LOAD Jan 21 06:20:20.749000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.749000 audit: BPF prog-id=191 op=UNLOAD Jan 21 06:20:20.749000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.754000 audit: BPF prog-id=192 op=LOAD Jan 21 06:20:20.754000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.754000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.758000 audit: BPF prog-id=193 op=LOAD Jan 21 06:20:20.758000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.758000 audit: BPF prog-id=193 op=UNLOAD Jan 21 06:20:20.758000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.758000 audit: BPF prog-id=192 op=UNLOAD Jan 21 06:20:20.758000 audit[4735]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:20:20.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.758000 audit: BPF prog-id=194 op=LOAD Jan 21 06:20:20.758000 audit[4735]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4445 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436326331646433393066323332393163663837626665336338633136 Jan 21 06:20:20.815486 systemd-networkd[1500]: cali3835073e662: Gained IPv6LL Jan 21 06:20:20.832000 audit: BPF prog-id=195 op=LOAD Jan 21 06:20:20.833000 audit: BPF prog-id=196 op=LOAD Jan 21 06:20:20.833000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4718 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.833000 audit: BPF prog-id=196 op=UNLOAD Jan 21 06:20:20.833000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4718 pid=4769 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.837000 audit: BPF prog-id=197 op=LOAD Jan 21 06:20:20.837000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4718 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.837000 audit: BPF prog-id=198 op=LOAD Jan 21 06:20:20.837000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4718 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.837000 audit: BPF prog-id=198 op=UNLOAD Jan 21 06:20:20.837000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=4718 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.837000 audit: BPF prog-id=197 op=UNLOAD Jan 21 06:20:20.837000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4718 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.844000 audit: BPF prog-id=199 op=LOAD Jan 21 06:20:20.844000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4718 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:20.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165656439303936343936656163646661313030313962343561663333 Jan 21 06:20:20.855321 containerd[1588]: time="2026-01-21T06:20:20.855170811Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-69d46b84b4-xb8qc,Uid:dfd24090-6b99-4c4c-8800-9882cbbf99e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7481b1aa8827eecc08d021e3181b9fc05cd96e6e018c2524d7cbddb9b74a6ff\"" Jan 21 06:20:20.872242 containerd[1588]: time="2026-01-21T06:20:20.866966635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 06:20:20.896548 systemd[1]: Started cri-containerd-ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686.scope - libcontainer container ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686. Jan 21 06:20:20.917972 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:20.941034 containerd[1588]: time="2026-01-21T06:20:20.940974957Z" level=info msg="StartContainer for \"d62c1dd390f23291cf87bfe3c8c16c11e55ad9de864c33ea2508be1c1d4a748c\" returns successfully" Jan 21 06:20:20.965943 containerd[1588]: time="2026-01-21T06:20:20.965424787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:20.971548 containerd[1588]: time="2026-01-21T06:20:20.971501067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 06:20:20.975198 containerd[1588]: time="2026-01-21T06:20:20.972967112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:20.976205 kubelet[2998]: E0121 06:20:20.975908 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 06:20:20.976289 
kubelet[2998]: E0121 06:20:20.975960 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 06:20:20.977011 kubelet[2998]: E0121 06:20:20.976936 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:44f28ba0df244f40918e802a350f80cc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:20.983508 containerd[1588]: time="2026-01-21T06:20:20.983241748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 06:20:21.008281 systemd-networkd[1500]: cali0479b899e55: Gained IPv6LL Jan 21 06:20:21.094405 containerd[1588]: time="2026-01-21T06:20:21.080532508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:21.098609 containerd[1588]: time="2026-01-21T06:20:21.098273566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 06:20:21.098609 containerd[1588]: time="2026-01-21T06:20:21.098417695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:21.100258 kubelet[2998]: E0121 06:20:21.099418 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 06:20:21.100258 kubelet[2998]: E0121 06:20:21.099471 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 06:20:21.100258 kubelet[2998]: E0121 06:20:21.099613 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:21.101886 kubelet[2998]: E0121 06:20:21.101757 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:20:21.148000 audit: BPF prog-id=200 op=LOAD Jan 21 06:20:21.153000 audit: BPF prog-id=201 op=LOAD Jan 21 06:20:21.153000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4755 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.160000 audit: BPF prog-id=201 op=UNLOAD Jan 21 06:20:21.160000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4755 
pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.162000 audit: BPF prog-id=202 op=LOAD Jan 21 06:20:21.162000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4755 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.164000 audit: BPF prog-id=203 op=LOAD Jan 21 06:20:21.164000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4755 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.164000 audit: BPF prog-id=203 op=UNLOAD Jan 21 06:20:21.164000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4755 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.164000 audit: BPF prog-id=202 op=UNLOAD Jan 21 06:20:21.164000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4755 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.168746 containerd[1588]: time="2026-01-21T06:20:21.166399590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-89ljm,Uid:d06b2fe8-bce2-4b8f-842a-8da146f1a644,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1eed9096496eacdfa10019b45af331e9b358bd8d09813ca502d758c538c7e260\"" Jan 21 06:20:21.175790 containerd[1588]: time="2026-01-21T06:20:21.175761292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 06:20:21.164000 audit: BPF prog-id=204 op=LOAD Jan 21 06:20:21.164000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4755 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361376262326531373931616461613436333362346666343835333963 Jan 21 06:20:21.177000 audit: BPF prog-id=205 op=LOAD Jan 21 06:20:21.177000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff522f4330 a2=98 a3=1fffffffffffffff items=0 ppid=4513 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.177000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 06:20:21.178000 audit: BPF prog-id=205 op=UNLOAD Jan 21 06:20:21.178000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff522f4300 a3=0 items=0 ppid=4513 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.178000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 06:20:21.181000 audit: BPF prog-id=206 op=LOAD Jan 21 06:20:21.181000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff522f4210 a2=94 a3=3 items=0 ppid=4513 pid=4905 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.181000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 06:20:21.182000 audit: BPF prog-id=206 op=UNLOAD Jan 21 06:20:21.182000 audit[4905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff522f4210 a2=94 a3=3 items=0 ppid=4513 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 06:20:21.182000 audit: BPF prog-id=207 op=LOAD Jan 21 06:20:21.182000 audit[4905]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff522f4250 a2=94 a3=7fff522f4430 items=0 ppid=4513 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 06:20:21.182000 audit: BPF prog-id=207 op=UNLOAD Jan 21 06:20:21.182000 audit[4905]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=7fff522f4250 a2=94 a3=7fff522f4430 items=0 ppid=4513 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.182000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 06:20:21.211991 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:21.214000 audit: BPF prog-id=208 op=LOAD Jan 21 06:20:21.214000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2a933960 a2=98 a3=3 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.214000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.215000 audit: BPF prog-id=208 op=UNLOAD Jan 21 06:20:21.215000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc2a933930 a3=0 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.215000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.216000 audit: BPF prog-id=209 op=LOAD Jan 21 06:20:21.216000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2a933750 a2=94 a3=54428f items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.216000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.217000 audit: BPF prog-id=209 op=UNLOAD Jan 21 06:20:21.217000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc2a933750 a2=94 a3=54428f items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.217000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.217000 audit: BPF prog-id=210 op=LOAD Jan 21 06:20:21.217000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2a933780 a2=94 a3=2 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.217000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.217000 audit: BPF prog-id=210 op=UNLOAD Jan 21 06:20:21.217000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc2a933780 a2=0 a3=2 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.217000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.281478 containerd[1588]: time="2026-01-21T06:20:21.281369662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:21.287060 kubelet[2998]: E0121 06:20:21.286957 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:20:21.301559 kubelet[2998]: E0121 06:20:21.301326 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:21.313773 containerd[1588]: time="2026-01-21T06:20:21.313322176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 06:20:21.313773 containerd[1588]: time="2026-01-21T06:20:21.313419407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:21.313942 kubelet[2998]: E0121 06:20:21.313513 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:21.313942 kubelet[2998]: E0121 06:20:21.313553 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:21.316760 kubelet[2998]: E0121 06:20:21.315436 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:21.319041 kubelet[2998]: E0121 06:20:21.319003 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:20:21.327596 kubelet[2998]: E0121 06:20:21.327568 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:21.348550 systemd-networkd[1500]: cali50b692ea130: Link UP Jan 21 06:20:21.350957 systemd-networkd[1500]: cali50b692ea130: Gained carrier Jan 21 06:20:21.392000 audit[4919]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:21.392000 audit[4919]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf3b08e60 a2=0 a3=7ffcf3b08e4c items=0 ppid=3160 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:21.410889 containerd[1588]: time="2026-01-21T06:20:21.410521993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9p9f8,Uid:18fcd4d3-26de-4ac6-99a6-06a703ea7790,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"ca7bb2e1791adaa4633b4ff48539cd7f9f116f49f42b8645075856248c9a9686\"" Jan 21 06:20:21.409000 audit[4919]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:21.409000 audit[4919]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf3b08e60 a2=0 a3=0 items=0 ppid=3160 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:21.417943 kubelet[2998]: I0121 06:20:21.416548 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z55s4" podStartSLOduration=54.416530609 podStartE2EDuration="54.416530609s" podCreationTimestamp="2026-01-21 06:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:20:21.412482161 +0000 UTC m=+61.133388722" watchObservedRunningTime="2026-01-21 06:20:21.416530609 +0000 UTC m=+61.137437171" Jan 21 06:20:21.431587 containerd[1588]: time="2026-01-21T06:20:21.431263163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:20.885 [INFO][4762] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:20.968 [INFO][4762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0 coredns-674b8bbfcf- kube-system d179681c-11cd-468d-ad87-dad9a234715d 929 0 2026-01-21 06:19:27 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rqf7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50b692ea130 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:20.968 [INFO][4762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.136 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" HandleID="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Workload="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.137 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" HandleID="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Workload="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b1460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rqf7b", "timestamp":"2026-01-21 06:20:21.136391641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.137 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.137 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.138 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.170 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.203 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.233 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.240 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.248 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.248 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.252 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8 Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.264 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.288 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.289 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" host="localhost" Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.291 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:21.433874 containerd[1588]: 2026-01-21 06:20:21.291 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" HandleID="k8s-pod-network.db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Workload="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 06:20:21.435051 containerd[1588]: 2026-01-21 06:20:21.334 [INFO][4762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d179681c-11cd-468d-ad87-dad9a234715d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rqf7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50b692ea130", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:21.435051 containerd[1588]: 2026-01-21 06:20:21.334 [INFO][4762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 06:20:21.435051 containerd[1588]: 2026-01-21 06:20:21.334 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50b692ea130 ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 
06:20:21.435051 containerd[1588]: 2026-01-21 06:20:21.372 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 06:20:21.435051 containerd[1588]: 2026-01-21 06:20:21.377 [INFO][4762] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d179681c-11cd-468d-ad87-dad9a234715d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8", Pod:"coredns-674b8bbfcf-rqf7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50b692ea130", 
MAC:"42:b6:74:fc:f4:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:21.435051 containerd[1588]: 2026-01-21 06:20:21.408 [INFO][4762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" Namespace="kube-system" Pod="coredns-674b8bbfcf-rqf7b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rqf7b-eth0" Jan 21 06:20:21.456779 systemd-networkd[1500]: calid48758ab45e: Gained IPv6LL Jan 21 06:20:21.472000 audit[4926]: NETFILTER_CFG table=filter:123 family=2 entries=17 op=nft_register_rule pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:21.472000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffbc463190 a2=0 a3=7fffbc46317c items=0 ppid=3160 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:21.483000 audit[4926]: NETFILTER_CFG table=nat:124 family=2 entries=35 op=nft_register_chain pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:21.483000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffbc463190 a2=0 a3=7fffbc46317c items=0 ppid=3160 pid=4926 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:21.517054 containerd[1588]: time="2026-01-21T06:20:21.516748605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:21.520288 containerd[1588]: time="2026-01-21T06:20:21.519972160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 06:20:21.520288 containerd[1588]: time="2026-01-21T06:20:21.520157083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:21.524147 kubelet[2998]: E0121 06:20:21.521605 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 06:20:21.524147 kubelet[2998]: E0121 06:20:21.523551 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 06:20:21.526911 kubelet[2998]: E0121 06:20:21.525562 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4br69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:21.530499 kubelet[2998]: E0121 06:20:21.529960 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:21.560493 containerd[1588]: time="2026-01-21T06:20:21.560447700Z" level=info msg="connecting to shim db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8" address="unix:///run/containerd/s/a29e32803c76864719f7c54637c2da27948069e7b2ec74eaaae589157e698c54" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:21.634000 audit: BPF prog-id=211 op=LOAD 
Jan 21 06:20:21.634000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2a933640 a2=94 a3=1 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.634000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.634000 audit: BPF prog-id=211 op=UNLOAD Jan 21 06:20:21.634000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc2a933640 a2=94 a3=1 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.634000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.642059 systemd[1]: Started cri-containerd-db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8.scope - libcontainer container db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8. 
Jan 21 06:20:21.647021 systemd-networkd[1500]: cali07e71ecc1d0: Gained IPv6LL Jan 21 06:20:21.652000 audit: BPF prog-id=212 op=LOAD Jan 21 06:20:21.652000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc2a933630 a2=94 a3=4 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.652000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.652000 audit: BPF prog-id=212 op=UNLOAD Jan 21 06:20:21.652000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc2a933630 a2=0 a3=4 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.652000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.652000 audit: BPF prog-id=213 op=LOAD Jan 21 06:20:21.652000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc2a933490 a2=94 a3=5 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.652000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.654000 audit: BPF prog-id=213 op=UNLOAD Jan 21 06:20:21.654000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc2a933490 a2=0 a3=5 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
Jan 21 06:20:21.654000 audit: BPF prog-id=214 op=LOAD Jan 21 06:20:21.654000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc2a9336b0 a2=94 a3=6 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.654000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.655000 audit: BPF prog-id=214 op=UNLOAD Jan 21 06:20:21.655000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc2a9336b0 a2=0 a3=6 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.655000 audit: BPF prog-id=215 op=LOAD Jan 21 06:20:21.655000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc2a932e60 a2=94 a3=88 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.655000 audit: BPF prog-id=216 op=LOAD Jan 21 06:20:21.655000 audit[4908]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc2a932ce0 a2=94 a3=2 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.655000 audit: BPF prog-id=216 op=UNLOAD Jan 21 
06:20:21.655000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc2a932d10 a2=0 a3=7ffc2a932e10 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.655000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.656000 audit: BPF prog-id=215 op=UNLOAD Jan 21 06:20:21.656000 audit[4908]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=22495d10 a2=0 a3=a6441dff49c9de94 items=0 ppid=4513 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.656000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 06:20:21.674000 audit: BPF prog-id=217 op=LOAD Jan 21 06:20:21.676000 audit: BPF prog-id=218 op=LOAD Jan 21 06:20:21.676000 audit[4947]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.676000 audit: BPF prog-id=218 op=UNLOAD Jan 21 06:20:21.676000 audit[4947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.677000 audit: BPF prog-id=219 op=LOAD Jan 21 06:20:21.677000 audit[4947]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.677000 audit: BPF prog-id=220 op=LOAD Jan 21 06:20:21.677000 audit[4947]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.677000 audit: BPF prog-id=220 op=UNLOAD Jan 21 06:20:21.677000 audit[4947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.677000 audit: BPF prog-id=219 op=UNLOAD Jan 21 06:20:21.677000 audit[4947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.677000 audit: BPF prog-id=221 op=LOAD Jan 21 06:20:21.677000 audit[4947]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4936 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462326266656637303631633366353932366639353130316530646461 Jan 21 06:20:21.680837 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:21.750000 audit: BPF prog-id=222 op=LOAD Jan 21 06:20:21.750000 audit[4968]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb882b2c0 a2=98 a3=1999999999999999 items=0 ppid=4513 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 06:20:21.750000 audit: BPF prog-id=222 op=UNLOAD Jan 21 06:20:21.750000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb882b290 a3=0 items=0 ppid=4513 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 06:20:21.750000 audit: BPF prog-id=223 op=LOAD Jan 21 06:20:21.750000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb882b1a0 a2=94 a3=ffff items=0 ppid=4513 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 
06:20:21.750000 audit: BPF prog-id=223 op=UNLOAD Jan 21 06:20:21.750000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb882b1a0 a2=94 a3=ffff items=0 ppid=4513 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 06:20:21.750000 audit: BPF prog-id=224 op=LOAD Jan 21 06:20:21.750000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb882b1e0 a2=94 a3=7ffeb882b3c0 items=0 ppid=4513 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 06:20:21.750000 audit: BPF prog-id=224 op=UNLOAD Jan 21 06:20:21.750000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb882b1e0 a2=94 a3=7ffeb882b3c0 items=0 ppid=4513 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.750000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 06:20:21.800533 containerd[1588]: time="2026-01-21T06:20:21.800345921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rqf7b,Uid:d179681c-11cd-468d-ad87-dad9a234715d,Namespace:kube-system,Attempt:0,} returns sandbox id \"db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8\"" Jan 21 06:20:21.807031 kubelet[2998]: E0121 06:20:21.806151 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:21.819286 containerd[1588]: time="2026-01-21T06:20:21.819034738Z" level=info msg="CreateContainer within sandbox \"db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 06:20:21.872360 containerd[1588]: time="2026-01-21T06:20:21.867928119Z" level=info msg="Container b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207: CDI devices from CRI Config.CDIDevices: []" Jan 21 06:20:21.885552 containerd[1588]: time="2026-01-21T06:20:21.885310617Z" level=info msg="CreateContainer within sandbox \"db2bfef7061c3f5926f95101e0dda095a410d832c3f80bb01f7e933a674f00a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207\"" Jan 21 06:20:21.888781 containerd[1588]: time="2026-01-21T06:20:21.887517640Z" level=info msg="StartContainer for \"b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207\"" Jan 21 06:20:21.889199 containerd[1588]: time="2026-01-21T06:20:21.889171345Z" level=info msg="connecting to shim b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207" 
address="unix:///run/containerd/s/a29e32803c76864719f7c54637c2da27948069e7b2ec74eaaae589157e698c54" protocol=ttrpc version=3 Jan 21 06:20:21.949240 systemd[1]: Started cri-containerd-b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207.scope - libcontainer container b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207. Jan 21 06:20:21.960331 systemd-networkd[1500]: vxlan.calico: Link UP Jan 21 06:20:21.962611 systemd-networkd[1500]: vxlan.calico: Gained carrier Jan 21 06:20:21.995000 audit: BPF prog-id=225 op=LOAD Jan 21 06:20:21.996000 audit: BPF prog-id=226 op=LOAD Jan 21 06:20:21.996000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:21.996000 audit: BPF prog-id=226 op=UNLOAD Jan 21 06:20:21.996000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:21.997000 audit: BPF prog-id=227 op=LOAD Jan 21 06:20:21.997000 audit[4986]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:21.997000 audit: BPF prog-id=228 op=LOAD Jan 21 06:20:21.997000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:21.997000 audit: BPF prog-id=228 op=UNLOAD Jan 21 06:20:21.997000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:21.997000 audit: BPF prog-id=227 op=UNLOAD 
Jan 21 06:20:21.997000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:21.997000 audit: BPF prog-id=229 op=LOAD Jan 21 06:20:21.997000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4936 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:21.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393034353138373938326363613335393738333466303061326266 Jan 21 06:20:22.070513 containerd[1588]: time="2026-01-21T06:20:22.070230692Z" level=info msg="StartContainer for \"b09045187982cca3597834f00a2bf89b9c2eba95b30babbc820018c768bbf207\" returns successfully" Jan 21 06:20:22.111786 kernel: kauditd_printk_skb: 283 callbacks suppressed Jan 21 06:20:22.111897 kernel: audit: type=1334 audit(1768976422.100:699): prog-id=230 op=LOAD Jan 21 06:20:22.111931 kernel: audit: type=1300 audit(1768976422.100:699): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffde347ce0 a2=98 a3=0 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.100000 audit: BPF prog-id=230 op=LOAD Jan 21 06:20:22.100000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffde347ce0 a2=98 a3=0 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.100000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.156764 kernel: audit: type=1327 audit(1768976422.100:699): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.100000 audit: BPF prog-id=230 op=UNLOAD Jan 21 06:20:22.164908 kernel: audit: type=1334 audit(1768976422.100:700): prog-id=230 op=UNLOAD Jan 21 06:20:22.100000 audit[5032]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffde347cb0 a3=0 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.189860 kernel: audit: type=1300 audit(1768976422.100:700): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffde347cb0 a3=0 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.100000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.221918 kernel: audit: type=1327 audit(1768976422.100:700): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.100000 audit: BPF prog-id=231 op=LOAD Jan 21 06:20:22.100000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffde347af0 a2=94 a3=54428f items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.259979 kernel: audit: type=1334 audit(1768976422.100:701): prog-id=231 op=LOAD Jan 21 06:20:22.260322 kernel: audit: type=1300 audit(1768976422.100:701): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffde347af0 a2=94 a3=54428f items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.100000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=231 op=UNLOAD Jan 21 06:20:22.292249 kernel: audit: type=1327 audit(1768976422.100:701): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.292357 
kernel: audit: type=1334 audit(1768976422.101:702): prog-id=231 op=UNLOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffde347af0 a2=94 a3=54428f items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=232 op=LOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffde347b20 a2=94 a3=2 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=232 op=UNLOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffde347b20 a2=0 a3=2 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=233 
op=LOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffde3478d0 a2=94 a3=4 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=233 op=UNLOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffde3478d0 a2=94 a3=4 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=234 op=LOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffde3479d0 a2=94 a3=7fffde347b50 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.101000 audit: BPF prog-id=234 op=UNLOAD Jan 21 06:20:22.101000 audit[5032]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffde3479d0 a2=0 a3=7fffde347b50 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.103000 audit: BPF prog-id=235 op=LOAD Jan 21 06:20:22.103000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffde347100 a2=94 a3=2 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.103000 audit: BPF prog-id=235 op=UNLOAD Jan 21 06:20:22.103000 audit[5032]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffde347100 a2=0 a3=2 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.103000 audit: BPF prog-id=236 op=LOAD Jan 21 06:20:22.103000 audit[5032]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 
a1=7fffde347200 a2=94 a3=30 items=0 ppid=4513 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 06:20:22.226000 audit: BPF prog-id=237 op=LOAD Jan 21 06:20:22.226000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff32fc0ca0 a2=98 a3=0 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.226000 audit: BPF prog-id=237 op=UNLOAD Jan 21 06:20:22.226000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff32fc0c70 a3=0 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.226000 audit: BPF prog-id=238 op=LOAD Jan 21 06:20:22.226000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff32fc0a90 a2=94 a3=54428f items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.227000 audit: BPF prog-id=238 op=UNLOAD Jan 21 06:20:22.227000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff32fc0a90 a2=94 a3=54428f items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.227000 audit: BPF prog-id=239 op=LOAD Jan 21 06:20:22.227000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff32fc0ac0 a2=94 a3=2 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.227000 audit: BPF prog-id=239 op=UNLOAD Jan 21 06:20:22.227000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff32fc0ac0 a2=0 a3=2 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.227000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.336786 kubelet[2998]: E0121 06:20:22.336199 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:22.342514 kubelet[2998]: E0121 06:20:22.340580 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:22.343840 kubelet[2998]: E0121 06:20:22.342920 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:20:22.345195 kubelet[2998]: E0121 06:20:22.344795 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:20:22.350306 kubelet[2998]: E0121 06:20:22.350250 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:22.351969 kubelet[2998]: E0121 06:20:22.351008 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:22.433901 kubelet[2998]: I0121 06:20:22.431829 2998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rqf7b" podStartSLOduration=55.43180576 podStartE2EDuration="55.43180576s" 
podCreationTimestamp="2026-01-21 06:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:20:22.382514479 +0000 UTC m=+62.103421041" watchObservedRunningTime="2026-01-21 06:20:22.43180576 +0000 UTC m=+62.152712341" Jan 21 06:20:22.501000 audit[5048]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:22.501000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc0f4a1d40 a2=0 a3=7ffc0f4a1d2c items=0 ppid=3160 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:22.520000 audit[5048]: NETFILTER_CFG table=nat:126 family=2 entries=44 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:22.520000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc0f4a1d40 a2=0 a3=7ffc0f4a1d2c items=0 ppid=3160 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:22.660000 audit: BPF prog-id=240 op=LOAD Jan 21 06:20:22.660000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff32fc0980 a2=94 a3=1 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.660000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.660000 audit: BPF prog-id=240 op=UNLOAD Jan 21 06:20:22.660000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff32fc0980 a2=94 a3=1 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.660000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.677000 audit: BPF prog-id=241 op=LOAD Jan 21 06:20:22.677000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff32fc0970 a2=94 a3=4 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.677000 audit: BPF prog-id=241 op=UNLOAD Jan 21 06:20:22.677000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff32fc0970 a2=0 a3=4 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.677000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.678000 audit: BPF prog-id=242 op=LOAD Jan 21 06:20:22.678000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff32fc07d0 a2=94 a3=5 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.678000 audit: BPF prog-id=242 op=UNLOAD Jan 21 06:20:22.678000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff32fc07d0 a2=0 a3=5 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.678000 audit: BPF prog-id=243 op=LOAD Jan 21 06:20:22.678000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff32fc09f0 a2=94 a3=6 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.678000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.678000 audit: BPF prog-id=243 op=UNLOAD Jan 21 06:20:22.678000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff32fc09f0 a2=0 a3=6 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.679000 audit: BPF prog-id=244 op=LOAD Jan 21 06:20:22.679000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff32fc01a0 a2=94 a3=88 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.679000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.679000 audit: BPF prog-id=245 op=LOAD Jan 21 06:20:22.679000 audit[5043]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff32fc0020 a2=94 a3=2 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.679000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.679000 audit: BPF prog-id=245 op=UNLOAD Jan 21 06:20:22.679000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff32fc0050 a2=0 a3=7fff32fc0150 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.679000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.680000 audit: BPF prog-id=244 op=UNLOAD Jan 21 06:20:22.680000 audit[5043]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=335bcd10 a2=0 a3=783f4e8693bf0f81 items=0 ppid=4513 pid=5043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.680000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 06:20:22.735000 audit: BPF prog-id=236 op=UNLOAD Jan 21 06:20:22.735000 audit[4513]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0008cc4c0 a2=0 a3=0 items=0 ppid=4495 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.735000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 21 06:20:22.891000 audit[5075]: NETFILTER_CFG table=nat:127 
family=2 entries=15 op=nft_register_chain pid=5075 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:22.891000 audit[5075]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd1c649660 a2=0 a3=7ffd1c64964c items=0 ppid=4513 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.891000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:22.892000 audit[5076]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=5076 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:22.892000 audit[5076]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffccc8953d0 a2=0 a3=7ffccc8953bc items=0 ppid=4513 pid=5076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:22.924000 audit[5078]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:22.924000 audit[5078]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd0b3970a0 a2=0 a3=7ffd0b39708c items=0 ppid=4513 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.924000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:22.925000 audit[5077]: NETFILTER_CFG table=filter:130 family=2 entries=270 op=nft_register_chain pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:22.925000 audit[5077]: SYSCALL arch=c000003e syscall=46 success=yes exit=157804 a0=3 a1=7fff692f34d0 a2=0 a3=5585fba4c000 items=0 ppid=4513 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:22.925000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:23.247559 systemd-networkd[1500]: cali50b692ea130: Gained IPv6LL Jan 21 06:20:23.344873 kubelet[2998]: E0121 06:20:23.344754 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:23.346603 kubelet[2998]: E0121 06:20:23.346537 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:23.348339 kubelet[2998]: E0121 06:20:23.346989 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" 
podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:23.449000 audit[5088]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:23.449000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcbc464090 a2=0 a3=7ffcbc46407c items=0 ppid=3160 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:23.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:23.487000 audit[5088]: NETFILTER_CFG table=nat:132 family=2 entries=56 op=nft_register_chain pid=5088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:23.487000 audit[5088]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcbc464090 a2=0 a3=7ffcbc46407c items=0 ppid=3160 pid=5088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:23.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:23.887423 systemd-networkd[1500]: vxlan.calico: Gained IPv6LL Jan 21 06:20:24.349415 kubelet[2998]: E0121 06:20:24.349292 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:25.356975 kubelet[2998]: E0121 06:20:25.356514 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:27.503318 
containerd[1588]: time="2026-01-21T06:20:27.502984293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-lvqcb,Uid:0928ac10-29ff-4619-8155-c160108ee532,Namespace:calico-apiserver,Attempt:0,}" Jan 21 06:20:27.762995 systemd-networkd[1500]: cali9c99c6254be: Link UP Jan 21 06:20:27.764613 systemd-networkd[1500]: cali9c99c6254be: Gained carrier Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.607 [INFO][5091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0 calico-apiserver-76f4489f98- calico-apiserver 0928ac10-29ff-4619-8155-c160108ee532 933 0 2026-01-21 06:19:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76f4489f98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76f4489f98-lvqcb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c99c6254be [] [] }} ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.608 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.678 [INFO][5105] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" 
HandleID="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Workload="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.678 [INFO][5105] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" HandleID="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Workload="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76f4489f98-lvqcb", "timestamp":"2026-01-21 06:20:27.678316381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.678 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.680 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.680 [INFO][5105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.694 [INFO][5105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.704 [INFO][5105] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.713 [INFO][5105] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.717 [INFO][5105] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.723 [INFO][5105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.723 [INFO][5105] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.726 [INFO][5105] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0 Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.734 [INFO][5105] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.745 [INFO][5105] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.745 [INFO][5105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" host="localhost" Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.745 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 06:20:27.798830 containerd[1588]: 2026-01-21 06:20:27.745 [INFO][5105] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" HandleID="k8s-pod-network.665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Workload="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.800280 containerd[1588]: 2026-01-21 06:20:27.753 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0", GenerateName:"calico-apiserver-76f4489f98-", Namespace:"calico-apiserver", SelfLink:"", UID:"0928ac10-29ff-4619-8155-c160108ee532", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f4489f98", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76f4489f98-lvqcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c99c6254be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:27.800280 containerd[1588]: 2026-01-21 06:20:27.756 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.800280 containerd[1588]: 2026-01-21 06:20:27.756 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c99c6254be ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.800280 containerd[1588]: 2026-01-21 06:20:27.765 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.800280 containerd[1588]: 2026-01-21 06:20:27.767 [INFO][5091] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0", GenerateName:"calico-apiserver-76f4489f98-", Namespace:"calico-apiserver", SelfLink:"", UID:"0928ac10-29ff-4619-8155-c160108ee532", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f4489f98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0", Pod:"calico-apiserver-76f4489f98-lvqcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c99c6254be", MAC:"2a:8c:fa:e4:64:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:27.800280 containerd[1588]: 2026-01-21 06:20:27.792 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" Namespace="calico-apiserver" Pod="calico-apiserver-76f4489f98-lvqcb" WorkloadEndpoint="localhost-k8s-calico--apiserver--76f4489f98--lvqcb-eth0" Jan 21 06:20:27.849821 kernel: kauditd_printk_skb: 110 callbacks suppressed Jan 21 06:20:27.849942 kernel: audit: type=1325 audit(1768976427.830:739): table=filter:133 family=2 entries=49 op=nft_register_chain pid=5125 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:27.830000 audit[5125]: NETFILTER_CFG table=filter:133 family=2 entries=49 op=nft_register_chain pid=5125 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:27.830000 audit[5125]: SYSCALL arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7fff1cf6f770 a2=0 a3=7fff1cf6f75c items=0 ppid=4513 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:27.870198 containerd[1588]: time="2026-01-21T06:20:27.869872444Z" level=info msg="connecting to shim 665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0" address="unix:///run/containerd/s/0b163259d0a7744da760a7beb39517b281289b3ad4e0bb02a174f4268c61ce23" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:27.890991 kernel: audit: type=1300 audit(1768976427.830:739): arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7fff1cf6f770 a2=0 a3=7fff1cf6f75c items=0 ppid=4513 pid=5125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:27.830000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:27.909981 kernel: audit: type=1327 
audit(1768976427.830:739): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:28.030028 systemd[1]: Started cri-containerd-665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0.scope - libcontainer container 665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0. Jan 21 06:20:28.062000 audit: BPF prog-id=246 op=LOAD Jan 21 06:20:28.063000 audit: BPF prog-id=247 op=LOAD Jan 21 06:20:28.071423 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:28.079389 kernel: audit: type=1334 audit(1768976428.062:740): prog-id=246 op=LOAD Jan 21 06:20:28.079435 kernel: audit: type=1334 audit(1768976428.063:741): prog-id=247 op=LOAD Jan 21 06:20:28.079465 kernel: audit: type=1300 audit(1768976428.063:741): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.063000 audit[5145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.124376 kernel: audit: type=1327 audit(1768976428.063:741): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.124515 kernel: audit: type=1334 audit(1768976428.063:742): prog-id=247 op=UNLOAD Jan 21 06:20:28.063000 audit: BPF prog-id=247 op=UNLOAD Jan 21 06:20:28.130470 kernel: audit: type=1300 audit(1768976428.063:742): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.063000 audit[5145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.063000 audit: BPF prog-id=248 op=LOAD Jan 21 06:20:28.174808 kernel: audit: type=1327 audit(1768976428.063:742): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.063000 audit[5145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.064000 audit: BPF prog-id=249 op=LOAD Jan 21 06:20:28.064000 audit[5145]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.064000 audit: BPF prog-id=249 op=UNLOAD Jan 21 06:20:28.064000 audit[5145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.064000 audit: BPF prog-id=248 op=UNLOAD Jan 21 06:20:28.064000 audit[5145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.064000 audit: BPF prog-id=250 op=LOAD Jan 21 06:20:28.064000 audit[5145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5135 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636356536636236313861336238396265633639613731376335623835 Jan 21 06:20:28.194896 containerd[1588]: time="2026-01-21T06:20:28.194781634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f4489f98-lvqcb,Uid:0928ac10-29ff-4619-8155-c160108ee532,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"665e6cb618a3b89bec69a717c5b851fa6573ecd48f71a794c6cf9003f62963f0\"" Jan 21 06:20:28.198969 containerd[1588]: time="2026-01-21T06:20:28.198774304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 06:20:28.262308 containerd[1588]: time="2026-01-21T06:20:28.261839862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:28.264235 containerd[1588]: time="2026-01-21T06:20:28.264045406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 06:20:28.264235 containerd[1588]: time="2026-01-21T06:20:28.264206456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:28.264783 kubelet[2998]: E0121 06:20:28.264445 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:28.264783 kubelet[2998]: E0121 06:20:28.264558 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:28.265329 kubelet[2998]: E0121 06:20:28.264811 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:28.266907 kubelet[2998]: E0121 06:20:28.266825 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:28.371321 kubelet[2998]: E0121 06:20:28.371029 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:28.436000 audit[5178]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=5178 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:28.436000 audit[5178]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff81c6490 a2=0 a3=7ffff81c647c items=0 ppid=3160 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:28.447000 audit[5178]: 
NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5178 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:20:28.447000 audit[5178]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff81c6490 a2=0 a3=7ffff81c647c items=0 ppid=3160 pid=5178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:28.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:20:29.377530 kubelet[2998]: E0121 06:20:29.376921 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:29.586069 systemd-networkd[1500]: cali9c99c6254be: Gained IPv6LL Jan 21 06:20:30.505835 containerd[1588]: time="2026-01-21T06:20:30.505464275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d998774-t5xkn,Uid:44e1484f-18ef-43d7-8551-7c92cf1926c4,Namespace:calico-system,Attempt:0,}" Jan 21 06:20:30.826783 systemd-networkd[1500]: calicedc2a5786a: Link UP Jan 21 06:20:30.828382 systemd-networkd[1500]: calicedc2a5786a: Gained carrier Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.664 [INFO][5181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0 
calico-kube-controllers-797d998774- calico-system 44e1484f-18ef-43d7-8551-7c92cf1926c4 930 0 2026-01-21 06:19:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:797d998774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-797d998774-t5xkn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicedc2a5786a [] [] }} ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.664 [INFO][5181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.743 [INFO][5196] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" HandleID="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Workload="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.744 [INFO][5196] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" HandleID="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Workload="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004fb60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-797d998774-t5xkn", "timestamp":"2026-01-21 06:20:30.743602447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.744 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.744 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.744 [INFO][5196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.757 [INFO][5196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.771 [INFO][5196] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.782 [INFO][5196] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.787 [INFO][5196] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.793 [INFO][5196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.793 [INFO][5196] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.797 [INFO][5196] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.805 [INFO][5196] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.818 [INFO][5196] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.818 [INFO][5196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" host="localhost" Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.818 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 06:20:30.853906 containerd[1588]: 2026-01-21 06:20:30.818 [INFO][5196] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" HandleID="k8s-pod-network.8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Workload="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.857980 containerd[1588]: 2026-01-21 06:20:30.822 [INFO][5181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0", GenerateName:"calico-kube-controllers-797d998774-", Namespace:"calico-system", SelfLink:"", UID:"44e1484f-18ef-43d7-8551-7c92cf1926c4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"797d998774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-797d998774-t5xkn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicedc2a5786a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:30.857980 containerd[1588]: 2026-01-21 06:20:30.822 [INFO][5181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.857980 containerd[1588]: 2026-01-21 06:20:30.822 [INFO][5181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicedc2a5786a ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.857980 containerd[1588]: 2026-01-21 06:20:30.827 [INFO][5181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.857980 containerd[1588]: 2026-01-21 06:20:30.829 [INFO][5181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0", GenerateName:"calico-kube-controllers-797d998774-", Namespace:"calico-system", SelfLink:"", UID:"44e1484f-18ef-43d7-8551-7c92cf1926c4", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 6, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"797d998774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa", Pod:"calico-kube-controllers-797d998774-t5xkn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicedc2a5786a", MAC:"aa:f5:78:12:64:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 06:20:30.857980 containerd[1588]: 2026-01-21 06:20:30.846 [INFO][5181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" Namespace="calico-system" Pod="calico-kube-controllers-797d998774-t5xkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--797d998774--t5xkn-eth0" Jan 21 06:20:30.881000 audit[5214]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=5214 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 06:20:30.881000 audit[5214]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffe2f8ce160 a2=0 a3=7ffe2f8ce14c items=0 ppid=4513 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:30.881000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 06:20:30.928451 containerd[1588]: time="2026-01-21T06:20:30.927287216Z" level=info msg="connecting to shim 8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa" address="unix:///run/containerd/s/38c86f87ebdcd6315c07f7c775db08e61292ce9ffb98fcb351af0872770b53e4" namespace=k8s.io protocol=ttrpc version=3 Jan 21 06:20:31.007293 systemd[1]: Started cri-containerd-8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa.scope - libcontainer container 8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa. 
Jan 21 06:20:31.033000 audit: BPF prog-id=251 op=LOAD Jan 21 06:20:31.036000 audit: BPF prog-id=252 op=LOAD Jan 21 06:20:31.036000 audit[5235]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:31.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.036000 audit: BPF prog-id=252 op=UNLOAD Jan 21 06:20:31.036000 audit[5235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:31.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.036000 audit: BPF prog-id=253 op=LOAD Jan 21 06:20:31.036000 audit[5235]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:31.036000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.037000 audit: BPF prog-id=254 op=LOAD Jan 21 06:20:31.037000 audit[5235]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:31.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.037000 audit: BPF prog-id=254 op=UNLOAD Jan 21 06:20:31.037000 audit[5235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:31.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.037000 audit: BPF prog-id=253 op=UNLOAD Jan 21 06:20:31.037000 audit[5235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 
06:20:31.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.037000 audit: BPF prog-id=255 op=LOAD Jan 21 06:20:31.037000 audit[5235]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5224 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:31.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333963313961346130356465336461353965663333326165633863 Jan 21 06:20:31.040577 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 21 06:20:31.112947 containerd[1588]: time="2026-01-21T06:20:31.112262175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-797d998774-t5xkn,Uid:44e1484f-18ef-43d7-8551-7c92cf1926c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b39c19a4a05de3da59ef332aec8c68bf3961bed2d44c518bd2d396eda0a2baa\"" Jan 21 06:20:31.130006 containerd[1588]: time="2026-01-21T06:20:31.120580261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 06:20:31.199293 containerd[1588]: time="2026-01-21T06:20:31.198956372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:31.200940 containerd[1588]: time="2026-01-21T06:20:31.200801406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 06:20:31.201199 containerd[1588]: time="2026-01-21T06:20:31.200874811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:31.202407 kubelet[2998]: E0121 06:20:31.201610 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 06:20:31.202407 kubelet[2998]: E0121 06:20:31.201987 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 06:20:31.203238 kubelet[2998]: E0121 06:20:31.202425 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5pbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:31.204247 kubelet[2998]: E0121 06:20:31.203985 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:31.384800 kubelet[2998]: E0121 06:20:31.384347 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:31.887363 systemd-networkd[1500]: calicedc2a5786a: Gained IPv6LL Jan 21 06:20:32.388331 kubelet[2998]: E0121 06:20:32.388205 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:32.503588 kubelet[2998]: E0121 06:20:32.503364 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:33.507297 containerd[1588]: time="2026-01-21T06:20:33.506858462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 06:20:33.570296 containerd[1588]: time="2026-01-21T06:20:33.569842234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:33.572746 containerd[1588]: time="2026-01-21T06:20:33.572509135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 06:20:33.572867 containerd[1588]: time="2026-01-21T06:20:33.572572732Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:33.573536 kubelet[2998]: E0121 06:20:33.573360 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 06:20:33.573536 kubelet[2998]: E0121 06:20:33.573411 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 06:20:33.573536 kubelet[2998]: E0121 06:20:33.573507 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:44f28ba0df244f40918e802a350f80cc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:33.576592 containerd[1588]: time="2026-01-21T06:20:33.576414775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 06:20:33.645851 containerd[1588]: time="2026-01-21T06:20:33.645240846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:33.648844 containerd[1588]: time="2026-01-21T06:20:33.648559582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 06:20:33.648844 containerd[1588]: time="2026-01-21T06:20:33.648819592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:33.649164 kubelet[2998]: E0121 06:20:33.649000 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 06:20:33.649164 kubelet[2998]: E0121 
06:20:33.649058 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 06:20:33.649874 kubelet[2998]: E0121 06:20:33.649270 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:R
untimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:33.651943 kubelet[2998]: E0121 06:20:33.651012 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:20:34.508807 containerd[1588]: time="2026-01-21T06:20:34.506351840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 06:20:34.589584 containerd[1588]: time="2026-01-21T06:20:34.589211121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:34.591789 containerd[1588]: time="2026-01-21T06:20:34.591199994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" Jan 21 06:20:34.591789 containerd[1588]: time="2026-01-21T06:20:34.591277887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:34.592253 kubelet[2998]: E0121 06:20:34.592067 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:34.593413 kubelet[2998]: E0121 06:20:34.592442 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:34.593413 kubelet[2998]: E0121 06:20:34.592908 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:34.593908 containerd[1588]: time="2026-01-21T06:20:34.593817818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 06:20:34.596536 kubelet[2998]: E0121 06:20:34.596428 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:20:34.669332 containerd[1588]: time="2026-01-21T06:20:34.669173217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:34.671689 containerd[1588]: time="2026-01-21T06:20:34.671503931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 06:20:34.671827 containerd[1588]: time="2026-01-21T06:20:34.671790928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:34.672164 kubelet[2998]: E0121 06:20:34.672007 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 06:20:34.672164 kubelet[2998]: E0121 06:20:34.672048 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 06:20:34.673010 kubelet[2998]: E0121 06:20:34.672586 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4br69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:34.674379 kubelet[2998]: E0121 06:20:34.674305 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:35.505139 containerd[1588]: time="2026-01-21T06:20:35.504599322Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 06:20:35.600760 containerd[1588]: time="2026-01-21T06:20:35.600283626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:35.604337 containerd[1588]: time="2026-01-21T06:20:35.603947876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 06:20:35.604337 containerd[1588]: time="2026-01-21T06:20:35.604058633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:35.604887 kubelet[2998]: E0121 06:20:35.604371 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 06:20:35.604887 kubelet[2998]: E0121 06:20:35.604433 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 06:20:35.604887 kubelet[2998]: E0121 06:20:35.604590 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 21 06:20:35.610578 containerd[1588]: time="2026-01-21T06:20:35.608545005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 06:20:35.697584 containerd[1588]: time="2026-01-21T06:20:35.696999900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:35.699286 containerd[1588]: time="2026-01-21T06:20:35.699187315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 06:20:35.699381 containerd[1588]: time="2026-01-21T06:20:35.699326933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:35.699569 kubelet[2998]: E0121 06:20:35.699458 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 06:20:35.699569 kubelet[2998]: E0121 06:20:35.699560 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 06:20:35.700973 kubelet[2998]: E0121 06:20:35.700776 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:35.703225 kubelet[2998]: E0121 06:20:35.703175 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:39.506270 containerd[1588]: time="2026-01-21T06:20:39.506221318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 06:20:39.579981 containerd[1588]: time="2026-01-21T06:20:39.579839974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:39.584288 containerd[1588]: time="2026-01-21T06:20:39.583873690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 06:20:39.584288 containerd[1588]: time="2026-01-21T06:20:39.584043366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:39.585463 kubelet[2998]: E0121 06:20:39.585357 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:39.587233 kubelet[2998]: E0121 06:20:39.585471 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:20:39.587233 kubelet[2998]: E0121 06:20:39.585597 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:39.587233 kubelet[2998]: E0121 06:20:39.586801 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:41.502859 kubelet[2998]: E0121 06:20:41.502550 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:43.511845 containerd[1588]: 
time="2026-01-21T06:20:43.510800485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 06:20:43.603752 containerd[1588]: time="2026-01-21T06:20:43.602850991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:43.614339 containerd[1588]: time="2026-01-21T06:20:43.614011220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 06:20:43.614339 containerd[1588]: time="2026-01-21T06:20:43.614104043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:43.614504 kubelet[2998]: E0121 06:20:43.614438 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 06:20:43.615241 kubelet[2998]: E0121 06:20:43.614510 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 06:20:43.616189 kubelet[2998]: E0121 06:20:43.615933 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5pbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:43.618059 kubelet[2998]: E0121 06:20:43.617972 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:43.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.136:22-10.0.0.1:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:20:43.888544 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:38052.service - OpenSSH per-connection server daemon (10.0.0.1:38052). Jan 21 06:20:43.894237 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 21 06:20:43.894332 kernel: audit: type=1130 audit(1768976443.888:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.136:22-10.0.0.1:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:20:44.127000 audit[5283]: USER_ACCT pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.129020 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 38052 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:20:44.133539 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:20:44.149593 systemd-logind[1571]: New session 9 of user core. 
Jan 21 06:20:44.153333 kernel: audit: type=1101 audit(1768976444.127:760): pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.130000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.178884 kernel: audit: type=1103 audit(1768976444.130:761): pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.181288 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 21 06:20:44.130000 audit[5283]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9a3c6c40 a2=3 a3=0 items=0 ppid=1 pid=5283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:44.220327 kernel: audit: type=1006 audit(1768976444.130:762): pid=5283 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 21 06:20:44.220411 kernel: audit: type=1300 audit(1768976444.130:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9a3c6c40 a2=3 a3=0 items=0 ppid=1 pid=5283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:44.221892 kernel: audit: type=1327 audit(1768976444.130:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:20:44.130000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:20:44.195000 audit[5283]: USER_START pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.263381 kernel: audit: type=1105 audit(1768976444.195:763): pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.213000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.288838 kernel: audit: type=1103 audit(1768976444.213:764): pid=5293 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.423029 sshd[5293]: Connection closed by 10.0.0.1 port 38052 Jan 21 06:20:44.424443 sshd-session[5283]: pam_unix(sshd:session): session closed for user core Jan 21 06:20:44.426000 audit[5283]: USER_END pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.435461 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:38052.service: Deactivated successfully. Jan 21 06:20:44.440376 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 06:20:44.445378 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Jan 21 06:20:44.447544 systemd-logind[1571]: Removed session 9. 
Jan 21 06:20:44.427000 audit[5283]: CRED_DISP pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.492268 kernel: audit: type=1106 audit(1768976444.426:765): pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.492388 kernel: audit: type=1104 audit(1768976444.427:766): pid=5283 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:44.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.136:22-10.0.0.1:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:20:45.508256 kubelet[2998]: E0121 06:20:45.508086 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:45.512616 kubelet[2998]: E0121 06:20:45.511878 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:20:47.508057 kubelet[2998]: E0121 06:20:47.507793 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:20:48.509558 kubelet[2998]: E0121 06:20:48.509345 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:20:49.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.136:22-10.0.0.1:39888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:20:49.443052 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:39888.service - OpenSSH per-connection server daemon (10.0.0.1:39888). Jan 21 06:20:49.450921 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 06:20:49.451023 kernel: audit: type=1130 audit(1768976449.442:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.136:22-10.0.0.1:39888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:20:49.602000 audit[5331]: USER_ACCT pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.604119 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 39888 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:20:49.607859 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:20:49.630877 kernel: audit: type=1101 audit(1768976449.602:769): pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.604000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.674836 kernel: audit: type=1103 audit(1768976449.604:770): pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.674951 kernel: audit: type=1006 audit(1768976449.604:771): pid=5331 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 21 06:20:49.665815 systemd-logind[1571]: New session 10 of user core. 
Jan 21 06:20:49.675457 kubelet[2998]: E0121 06:20:49.665295 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:49.604000 audit[5331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2d858ff0 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:49.604000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:20:49.718825 kernel: audit: type=1300 audit(1768976449.604:771): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2d858ff0 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:49.718925 kernel: audit: type=1327 audit(1768976449.604:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:20:49.721320 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 21 06:20:49.738000 audit[5331]: USER_START pid=5331 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.777048 kernel: audit: type=1105 audit(1768976449.738:772): pid=5331 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.751000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.802960 kernel: audit: type=1103 audit(1768976449.751:773): pid=5336 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.991418 sshd[5336]: Connection closed by 10.0.0.1 port 39888 Jan 21 06:20:49.992369 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jan 21 06:20:49.996000 audit[5331]: USER_END pid=5331 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:50.036851 kernel: audit: type=1106 audit(1768976449.996:774): pid=5331 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:49.996000 audit[5331]: CRED_DISP pid=5331 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:50.042993 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Jan 21 06:20:50.047003 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:39888.service: Deactivated successfully. Jan 21 06:20:50.055013 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 06:20:50.059992 systemd-logind[1571]: Removed session 10. Jan 21 06:20:50.062862 kernel: audit: type=1104 audit(1768976449.996:775): pid=5331 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:50.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.136:22-10.0.0.1:39888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:20:53.504070 kubelet[2998]: E0121 06:20:53.503614 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:20:55.028807 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 06:20:55.028952 kernel: audit: type=1130 audit(1768976455.018:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.136:22-10.0.0.1:41398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:20:55.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.136:22-10.0.0.1:41398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:20:55.018527 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:41398.service - OpenSSH per-connection server daemon (10.0.0.1:41398). 
Jan 21 06:20:55.227000 audit[5355]: USER_ACCT pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.229029 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 41398 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:20:55.232318 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:20:55.244359 systemd-logind[1571]: New session 11 of user core. Jan 21 06:20:55.230000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.273408 kernel: audit: type=1101 audit(1768976455.227:778): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.273559 kernel: audit: type=1103 audit(1768976455.230:779): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.273606 kernel: audit: type=1006 audit(1768976455.230:780): pid=5355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 21 06:20:55.230000 audit[5355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd19d13c80 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:55.313610 kernel: audit: type=1300 audit(1768976455.230:780): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd19d13c80 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:20:55.313883 kernel: audit: type=1327 audit(1768976455.230:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:20:55.230000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:20:55.324448 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 21 06:20:55.331000 audit[5355]: USER_START pid=5355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.363991 kernel: audit: type=1105 audit(1768976455.331:781): pid=5355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.364107 kernel: audit: type=1103 audit(1768976455.331:782): pid=5359 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.331000 audit[5359]: CRED_ACQ pid=5359 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.513858 kubelet[2998]: E0121 06:20:55.513531 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:20:55.647391 sshd[5359]: Connection closed by 10.0.0.1 port 41398 Jan 21 06:20:55.648095 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Jan 21 06:20:55.653000 audit[5355]: USER_END pid=5355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.659616 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:41398.service: Deactivated successfully. Jan 21 06:20:55.663748 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 06:20:55.673542 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit. Jan 21 06:20:55.653000 audit[5355]: CRED_DISP pid=5355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.682326 systemd-logind[1571]: Removed session 11. 
Jan 21 06:20:55.702765 kernel: audit: type=1106 audit(1768976455.653:783): pid=5355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.702840 kernel: audit: type=1104 audit(1768976455.653:784): pid=5355 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:20:55.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.136:22-10.0.0.1:41398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:20:56.506170 kubelet[2998]: E0121 06:20:56.505389 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:57.504287 kubelet[2998]: E0121 06:20:57.503779 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:20:57.507325 containerd[1588]: time="2026-01-21T06:20:57.506897768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 06:20:57.577364 containerd[1588]: time="2026-01-21T06:20:57.576028073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:57.579478 containerd[1588]: time="2026-01-21T06:20:57.579285826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 06:20:57.579478 containerd[1588]: time="2026-01-21T06:20:57.579376826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:57.579836 kubelet[2998]: E0121 06:20:57.579533 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 06:20:57.579836 kubelet[2998]: E0121 06:20:57.579590 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 06:20:57.579957 kubelet[2998]: E0121 06:20:57.579877 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4br69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:57.581780 kubelet[2998]: E0121 06:20:57.581089 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:20:58.505407 containerd[1588]: time="2026-01-21T06:20:58.504959300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 06:20:58.580988 containerd[1588]: time="2026-01-21T06:20:58.580901756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:58.584434 containerd[1588]: 
time="2026-01-21T06:20:58.583379136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 06:20:58.584434 containerd[1588]: time="2026-01-21T06:20:58.583790853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:58.584813 kubelet[2998]: E0121 06:20:58.584612 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 06:20:58.585920 kubelet[2998]: E0121 06:20:58.584822 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 06:20:58.585920 kubelet[2998]: E0121 06:20:58.584969 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:44f28ba0df244f40918e802a350f80cc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:58.592151 containerd[1588]: time="2026-01-21T06:20:58.592117970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 06:20:58.669357 containerd[1588]: 
time="2026-01-21T06:20:58.669143327Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:58.671808 containerd[1588]: time="2026-01-21T06:20:58.671562438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 06:20:58.672316 containerd[1588]: time="2026-01-21T06:20:58.671957740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:58.673322 kubelet[2998]: E0121 06:20:58.672845 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 06:20:58.673322 kubelet[2998]: E0121 06:20:58.672986 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 06:20:58.673322 kubelet[2998]: E0121 06:20:58.673132 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:58.674417 kubelet[2998]: E0121 06:20:58.674342 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:20:59.508195 containerd[1588]: time="2026-01-21T06:20:59.508042181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 06:20:59.587096 containerd[1588]: time="2026-01-21T06:20:59.586953216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:59.590297 containerd[1588]: time="2026-01-21T06:20:59.590162121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 06:20:59.590952 containerd[1588]: time="2026-01-21T06:20:59.590352927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:59.590989 kubelet[2998]: E0121 06:20:59.590507 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 06:20:59.590989 kubelet[2998]: E0121 06:20:59.590557 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 06:20:59.590989 kubelet[2998]: E0121 06:20:59.590845 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:59.595398 containerd[1588]: time="2026-01-21T06:20:59.595071668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 06:20:59.662449 containerd[1588]: time="2026-01-21T06:20:59.662119091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:20:59.667428 containerd[1588]: time="2026-01-21T06:20:59.667064797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 06:20:59.667428 containerd[1588]: time="2026-01-21T06:20:59.667199951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 06:20:59.667568 kubelet[2998]: E0121 06:20:59.667435 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 06:20:59.667568 
kubelet[2998]: E0121 06:20:59.667496 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 06:20:59.668485 kubelet[2998]: E0121 06:20:59.668290 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jskfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4vl7_calico-system(219deac5-c979-42b1-a796-a0c185470d95): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 06:20:59.670195 kubelet[2998]: E0121 06:20:59.669930 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:21:00.505989 containerd[1588]: time="2026-01-21T06:21:00.505851096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 06:21:00.580898 containerd[1588]: time="2026-01-21T06:21:00.580778391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:21:00.586309 containerd[1588]: time="2026-01-21T06:21:00.584129391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 06:21:00.586309 containerd[1588]: time="2026-01-21T06:21:00.584202887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 06:21:00.586447 kubelet[2998]: E0121 06:21:00.585447 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:21:00.586447 kubelet[2998]: E0121 06:21:00.585503 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:21:00.586553 kubelet[2998]: E0121 06:21:00.586483 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76f4489f98-89ljm_calico-apiserver(d06b2fe8-bce2-4b8f-842a-8da146f1a644): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 06:21:00.588037 kubelet[2998]: E0121 06:21:00.587771 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:21:00.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.136:22-10.0.0.1:41406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:00.665852 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:41406.service - OpenSSH per-connection server daemon (10.0.0.1:41406). Jan 21 06:21:00.685814 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 06:21:00.685924 kernel: audit: type=1130 audit(1768976460.665:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.136:22-10.0.0.1:41406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:00.816000 audit[5378]: USER_ACCT pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.820109 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:00.821553 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:00.835477 systemd-logind[1571]: New session 12 of user core. Jan 21 06:21:00.819000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.864393 kernel: audit: type=1101 audit(1768976460.816:787): pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.864552 kernel: audit: type=1103 audit(1768976460.819:788): pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.864601 kernel: audit: type=1006 audit(1768976460.819:789): pid=5378 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 21 06:21:00.819000 audit[5378]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb2f94440 a2=3 a3=0 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:00.819000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:00.916532 kernel: audit: type=1300 audit(1768976460.819:789): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb2f94440 a2=3 a3=0 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:00.916801 kernel: audit: type=1327 audit(1768976460.819:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:00.918850 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 21 06:21:00.924000 audit[5378]: USER_START pid=5378 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.956840 kernel: audit: type=1105 audit(1768976460.924:790): pid=5378 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.957078 kernel: audit: type=1103 audit(1768976460.928:791): pid=5382 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:00.928000 audit[5382]: CRED_ACQ pid=5382 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.123891 sshd[5382]: Connection closed by 10.0.0.1 port 41406 Jan 21 06:21:01.122521 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:01.126000 audit[5378]: USER_END pid=5378 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.158084 kernel: audit: type=1106 audit(1768976461.126:792): pid=5378 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.158199 kernel: audit: type=1104 audit(1768976461.126:793): pid=5378 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.126000 audit[5378]: CRED_DISP pid=5378 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.162872 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:41406.service: Deactivated successfully. Jan 21 06:21:01.166467 systemd[1]: session-12.scope: Deactivated successfully. Jan 21 06:21:01.168790 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. 
Jan 21 06:21:01.175975 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:41418.service - OpenSSH per-connection server daemon (10.0.0.1:41418). Jan 21 06:21:01.178935 systemd-logind[1571]: Removed session 12. Jan 21 06:21:01.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.136:22-10.0.0.1:41406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:01.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.136:22-10.0.0.1:41418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:01.270000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.272060 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 41418 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:01.273000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.273000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed87b77f0 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:01.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:01.276058 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 21 06:21:01.287494 systemd-logind[1571]: New session 13 of user core. Jan 21 06:21:01.300294 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 21 06:21:01.305000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.308000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.493190 systemd[1708]: Created slice background.slice - User Background Tasks Slice. Jan 21 06:21:01.497965 systemd[1708]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 21 06:21:01.535601 sshd[5402]: Connection closed by 10.0.0.1 port 41418 Jan 21 06:21:01.538121 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:01.547557 systemd[1708]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 21 06:21:01.547000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.547000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.554999 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:41420.service - OpenSSH per-connection server daemon (10.0.0.1:41420). Jan 21 06:21:01.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.136:22-10.0.0.1:41420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:01.558488 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:41418.service: Deactivated successfully. Jan 21 06:21:01.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.136:22-10.0.0.1:41418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:01.563027 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 06:21:01.571033 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. Jan 21 06:21:01.573565 systemd-logind[1571]: Removed session 13. 
Jan 21 06:21:01.675981 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 41420 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:01.674000 audit[5413]: USER_ACCT pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.678000 audit[5413]: CRED_ACQ pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.678000 audit[5413]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffeaaf3c70 a2=3 a3=0 items=0 ppid=1 pid=5413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:01.678000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:01.682927 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:01.701602 systemd-logind[1571]: New session 14 of user core. Jan 21 06:21:01.706460 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 21 06:21:01.716000 audit[5413]: USER_START pid=5413 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.721000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.893467 sshd[5421]: Connection closed by 10.0.0.1 port 41420 Jan 21 06:21:01.893390 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:01.895000 audit[5413]: USER_END pid=5413 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.896000 audit[5413]: CRED_DISP pid=5413 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:01.899975 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:41420.service: Deactivated successfully. Jan 21 06:21:01.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.136:22-10.0.0.1:41420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:01.904355 systemd[1]: session-14.scope: Deactivated successfully. Jan 21 06:21:01.908887 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. 
Jan 21 06:21:01.911534 systemd-logind[1571]: Removed session 14. Jan 21 06:21:04.503763 kubelet[2998]: E0121 06:21:04.503397 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 21 06:21:05.507090 containerd[1588]: time="2026-01-21T06:21:05.506530578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 06:21:05.603372 containerd[1588]: time="2026-01-21T06:21:05.602970288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 06:21:05.604892 containerd[1588]: time="2026-01-21T06:21:05.604782588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 06:21:05.604892 containerd[1588]: time="2026-01-21T06:21:05.604838650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 06:21:05.605207 kubelet[2998]: E0121 06:21:05.605067 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:21:05.605207 kubelet[2998]: E0121 06:21:05.605116 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 06:21:05.606895 kubelet[2998]: E0121 06:21:05.605239 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76f4489f98-lvqcb_calico-apiserver(0928ac10-29ff-4619-8155-c160108ee532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 06:21:05.606895 kubelet[2998]: E0121 06:21:05.606888 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:21:06.517596 containerd[1588]: time="2026-01-21T06:21:06.517100090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 06:21:06.594770 containerd[1588]: time="2026-01-21T06:21:06.594542796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 
06:21:06.597990 containerd[1588]: time="2026-01-21T06:21:06.597798094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 06:21:06.598106 containerd[1588]: time="2026-01-21T06:21:06.598037199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 06:21:06.599139 kubelet[2998]: E0121 06:21:06.598816 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 06:21:06.599139 kubelet[2998]: E0121 06:21:06.598873 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 06:21:06.599139 kubelet[2998]: E0121 06:21:06.599066 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5pbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-797d998774-t5xkn_calico-system(44e1484f-18ef-43d7-8551-7c92cf1926c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 06:21:06.601007 kubelet[2998]: E0121 06:21:06.600944 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:21:06.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.136:22-10.0.0.1:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:06.908564 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:43040.service - OpenSSH per-connection server daemon (10.0.0.1:43040). Jan 21 06:21:06.914863 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 06:21:06.914951 kernel: audit: type=1130 audit(1768976466.908:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.136:22-10.0.0.1:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:07.024000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.026430 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 43040 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:07.030555 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:07.040832 systemd-logind[1571]: New session 15 of user core. 
Jan 21 06:21:07.028000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.081813 kernel: audit: type=1101 audit(1768976467.024:814): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.081888 kernel: audit: type=1103 audit(1768976467.028:815): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.081912 kernel: audit: type=1006 audit(1768976467.028:816): pid=5443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 21 06:21:07.028000 audit[5443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda73161a0 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:07.119001 kernel: audit: type=1300 audit(1768976467.028:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda73161a0 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:07.119071 kernel: audit: type=1327 audit(1768976467.028:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:07.028000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:07.131189 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 21 06:21:07.137000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.141000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.180010 kernel: audit: type=1105 audit(1768976467.137:817): pid=5443 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.180085 kernel: audit: type=1103 audit(1768976467.141:818): pid=5448 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.294346 sshd[5448]: Connection closed by 10.0.0.1 port 43040 Jan 21 06:21:07.294977 sshd-session[5443]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:07.297000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 21 06:21:07.302893 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:43040.service: Deactivated successfully. Jan 21 06:21:07.307404 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 06:21:07.310838 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. Jan 21 06:21:07.313489 systemd-logind[1571]: Removed session 15. Jan 21 06:21:07.297000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.351448 kernel: audit: type=1106 audit(1768976467.297:819): pid=5443 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.351514 kernel: audit: type=1104 audit(1768976467.297:820): pid=5443 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:07.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.136:22-10.0.0.1:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:09.503860 kubelet[2998]: E0121 06:21:09.503791 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:21:11.508613 kubelet[2998]: E0121 06:21:11.506582 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:21:12.314602 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:43052.service - OpenSSH per-connection server daemon (10.0.0.1:43052). Jan 21 06:21:12.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.136:22-10.0.0.1:43052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:12.320434 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 06:21:12.320505 kernel: audit: type=1130 audit(1768976472.314:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.136:22-10.0.0.1:43052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:12.463000 audit[5463]: USER_ACCT pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.467981 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 43052 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:12.472971 sshd-session[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:12.487230 systemd-logind[1571]: New session 16 of user core. 
Jan 21 06:21:12.469000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.509917 kernel: audit: type=1101 audit(1768976472.463:823): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.510034 kernel: audit: type=1103 audit(1768976472.469:824): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.523841 kernel: audit: type=1006 audit(1768976472.469:825): pid=5463 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 21 06:21:12.523911 kernel: audit: type=1300 audit(1768976472.469:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3dbcce00 a2=3 a3=0 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:12.469000 audit[5463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3dbcce00 a2=3 a3=0 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:12.469000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:12.555824 kernel: audit: type=1327 audit(1768976472.469:825): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:12.557043 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 21 06:21:12.561000 audit[5463]: USER_START pid=5463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.564000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.606010 kernel: audit: type=1105 audit(1768976472.561:826): pid=5463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.606103 kernel: audit: type=1103 audit(1768976472.564:827): pid=5467 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.746254 sshd[5467]: Connection closed by 10.0.0.1 port 43052 Jan 21 06:21:12.747911 sshd-session[5463]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:12.751000 audit[5463]: USER_END pid=5463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 21 06:21:12.782949 kernel: audit: type=1106 audit(1768976472.751:828): pid=5463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.783079 kernel: audit: type=1104 audit(1768976472.751:829): pid=5463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.751000 audit[5463]: CRED_DISP pid=5463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:12.787957 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:43052.service: Deactivated successfully. Jan 21 06:21:12.788442 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. Jan 21 06:21:12.792787 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 06:21:12.796543 systemd-logind[1571]: Removed session 16. Jan 21 06:21:12.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.136:22-10.0.0.1:43052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:13.506516 kubelet[2998]: E0121 06:21:13.505930 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:21:15.504607 kubelet[2998]: E0121 06:21:15.504262 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:21:17.769176 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:39022.service - OpenSSH per-connection server daemon (10.0.0.1:39022). Jan 21 06:21:17.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.136:22-10.0.0.1:39022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:17.776988 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 06:21:17.777075 kernel: audit: type=1130 audit(1768976477.768:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.136:22-10.0.0.1:39022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:17.889000 audit[5481]: USER_ACCT pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:17.890873 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 39022 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:17.895323 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:17.906159 systemd-logind[1571]: New session 17 of user core. 
Jan 21 06:21:17.892000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:17.939907 kernel: audit: type=1101 audit(1768976477.889:832): pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:17.940040 kernel: audit: type=1103 audit(1768976477.892:833): pid=5481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:17.940097 kernel: audit: type=1006 audit(1768976477.892:834): pid=5481 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 21 06:21:17.892000 audit[5481]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed60c1b60 a2=3 a3=0 items=0 ppid=1 pid=5481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:17.978807 kernel: audit: type=1300 audit(1768976477.892:834): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed60c1b60 a2=3 a3=0 items=0 ppid=1 pid=5481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:17.978924 kernel: audit: type=1327 audit(1768976477.892:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:17.892000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:17.996173 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 21 06:21:18.005000 audit[5481]: USER_START pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.013000 audit[5485]: CRED_ACQ pid=5485 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.073818 kernel: audit: type=1105 audit(1768976478.005:835): pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.073935 kernel: audit: type=1103 audit(1768976478.013:836): pid=5485 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.320481 sshd[5485]: Connection closed by 10.0.0.1 port 39022 Jan 21 06:21:18.320952 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:18.323000 audit[5481]: USER_END pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 21 06:21:18.332319 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:39022.service: Deactivated successfully. Jan 21 06:21:18.335899 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 06:21:18.339559 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Jan 21 06:21:18.340875 systemd-logind[1571]: Removed session 17. Jan 21 06:21:18.323000 audit[5481]: CRED_DISP pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.355541 kernel: audit: type=1106 audit(1768976478.323:837): pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.355595 kernel: audit: type=1104 audit(1768976478.323:838): pid=5481 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:18.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.136:22-10.0.0.1:39022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:19.503465 kubelet[2998]: E0121 06:21:19.503340 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:21:20.505856 kubelet[2998]: E0121 06:21:20.504944 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:21:23.334996 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:39034.service - OpenSSH per-connection server daemon (10.0.0.1:39034). Jan 21 06:21:23.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.136:22-10.0.0.1:39034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:23.340600 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 06:21:23.340719 kernel: audit: type=1130 audit(1768976483.334:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.136:22-10.0.0.1:39034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:23.439000 audit[5530]: USER_ACCT pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.440849 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 39034 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:23.444298 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:23.452808 systemd-logind[1571]: New session 18 of user core. Jan 21 06:21:23.441000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.469793 kernel: audit: type=1101 audit(1768976483.439:841): pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.469914 kernel: audit: type=1103 audit(1768976483.441:842): pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.478077 kernel: audit: type=1006 audit(1768976483.441:843): pid=5530 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 21 06:21:23.478183 kernel: audit: type=1300 audit(1768976483.441:843): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd63c64480 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:23.441000 audit[5530]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd63c64480 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:23.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:23.498374 kernel: audit: type=1327 audit(1768976483.441:843): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:23.503718 kubelet[2998]: E0121 06:21:23.503318 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790" Jan 21 06:21:23.507788 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 21 06:21:23.516000 audit[5530]: USER_START pid=5530 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.539818 kernel: audit: type=1105 audit(1768976483.516:844): pid=5530 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.523000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.563892 kernel: audit: type=1103 audit(1768976483.523:845): pid=5534 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.681735 sshd[5534]: Connection closed by 10.0.0.1 port 39034 Jan 21 06:21:23.682132 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:23.684000 audit[5530]: USER_END pid=5530 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.684000 audit[5530]: CRED_DISP pid=5530 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.713765 kernel: audit: type=1106 audit(1768976483.684:846): pid=5530 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.713862 kernel: audit: type=1104 audit(1768976483.684:847): pid=5530 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.720466 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:39034.service: Deactivated successfully. Jan 21 06:21:23.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.136:22-10.0.0.1:39034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:23.723498 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 06:21:23.725591 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Jan 21 06:21:23.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.136:22-10.0.0.1:39036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:23.731086 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:39036.service - OpenSSH per-connection server daemon (10.0.0.1:39036). Jan 21 06:21:23.733300 systemd-logind[1571]: Removed session 18. 
Jan 21 06:21:23.807000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.808251 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 39036 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:23.808000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.809000 audit[5549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff560f82b0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:23.809000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:23.811245 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:23.819886 systemd-logind[1571]: New session 19 of user core. Jan 21 06:21:23.828939 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 21 06:21:23.832000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:23.835000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.092798 sshd[5553]: Connection closed by 10.0.0.1 port 39036 Jan 21 06:21:24.093481 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:24.100000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.100000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.110353 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:39036.service: Deactivated successfully. Jan 21 06:21:24.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.136:22-10.0.0.1:39036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:24.114304 systemd[1]: session-19.scope: Deactivated successfully. Jan 21 06:21:24.116398 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit. 
Jan 21 06:21:24.123529 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:39050.service - OpenSSH per-connection server daemon (10.0.0.1:39050). Jan 21 06:21:24.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.136:22-10.0.0.1:39050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:24.125361 systemd-logind[1571]: Removed session 19. Jan 21 06:21:24.257000 audit[5564]: USER_ACCT pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.258830 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 39050 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:24.259000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.260000 audit[5564]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe979ff670 a2=3 a3=0 items=0 ppid=1 pid=5564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:24.260000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:24.262723 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:24.272141 systemd-logind[1571]: New session 20 of user core. Jan 21 06:21:24.286086 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 21 06:21:24.291000 audit[5564]: USER_START pid=5564 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.295000 audit[5568]: CRED_ACQ pid=5568 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:24.507225 kubelet[2998]: E0121 06:21:24.506469 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95" Jan 21 06:21:25.020000 audit[5582]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:21:25.020000 audit[5582]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc28fad880 a2=0 a3=7ffc28fad86c items=0 ppid=3160 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:25.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:21:25.033000 audit[5582]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:21:25.033000 audit[5582]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc28fad880 a2=0 a3=0 items=0 ppid=3160 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:25.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:21:25.039095 sshd[5568]: Connection closed by 10.0.0.1 port 39050 Jan 21 06:21:25.039808 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:25.046000 audit[5564]: USER_END pid=5564 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.046000 audit[5564]: CRED_DISP pid=5564 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.054539 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:40968.service - OpenSSH per-connection server daemon (10.0.0.1:40968). 
Jan 21 06:21:25.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.136:22-10.0.0.1:40968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:25.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.136:22-10.0.0.1:39050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:25.056306 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:39050.service: Deactivated successfully. Jan 21 06:21:25.062774 systemd[1]: session-20.scope: Deactivated successfully. Jan 21 06:21:25.066809 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. Jan 21 06:21:25.071258 systemd-logind[1571]: Removed session 20. Jan 21 06:21:25.090000 audit[5586]: NETFILTER_CFG table=filter:139 family=2 entries=38 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:21:25.090000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffca4d00ea0 a2=0 a3=7ffca4d00e8c items=0 ppid=3160 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:25.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:21:25.100000 audit[5586]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:21:25.100000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca4d00ea0 a2=0 a3=0 items=0 ppid=3160 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:25.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:21:25.157000 audit[5585]: USER_ACCT pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.159038 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 40968 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:25.159000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.159000 audit[5585]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc55abd7c0 a2=3 a3=0 items=0 ppid=1 pid=5585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:25.159000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:25.162219 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:25.170747 systemd-logind[1571]: New session 21 of user core. Jan 21 06:21:25.179024 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 21 06:21:25.181000 audit[5585]: USER_START pid=5585 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.184000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.441495 sshd[5593]: Connection closed by 10.0.0.1 port 40968 Jan 21 06:21:25.442744 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:25.444000 audit[5585]: USER_END pid=5585 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.445000 audit[5585]: CRED_DISP pid=5585 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.462191 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:40968.service: Deactivated successfully. Jan 21 06:21:25.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.136:22-10.0.0.1:40968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:25.470177 systemd[1]: session-21.scope: Deactivated successfully. Jan 21 06:21:25.474188 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. 
Jan 21 06:21:25.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.136:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:25.479103 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:40972.service - OpenSSH per-connection server daemon (10.0.0.1:40972). Jan 21 06:21:25.482542 systemd-logind[1571]: Removed session 21. Jan 21 06:21:25.509204 kubelet[2998]: E0121 06:21:25.509101 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5" Jan 21 06:21:25.568000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.569249 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:25.570000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.570000 audit[5605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0e94b910 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:25.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:25.573098 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:25.584997 systemd-logind[1571]: New session 22 of user core. Jan 21 06:21:25.593258 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 21 06:21:25.602000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.609000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.742382 sshd[5609]: Connection closed by 10.0.0.1 port 40972 Jan 21 06:21:25.743048 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:25.744000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.745000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:25.750603 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:40972.service: Deactivated successfully. Jan 21 06:21:25.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.136:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:25.754510 systemd[1]: session-22.scope: Deactivated successfully. Jan 21 06:21:25.757814 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. Jan 21 06:21:25.760244 systemd-logind[1571]: Removed session 22. Jan 21 06:21:28.502659 kubelet[2998]: E0121 06:21:28.502551 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644" Jan 21 06:21:30.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.136:22-10.0.0.1:40986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:30.757454 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:40986.service - OpenSSH per-connection server daemon (10.0.0.1:40986). 
Jan 21 06:21:30.763301 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 21 06:21:30.763394 kernel: audit: type=1130 audit(1768976490.757:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.136:22-10.0.0.1:40986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:30.871000 audit[5624]: USER_ACCT pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:30.872755 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 40986 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o Jan 21 06:21:30.877342 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 06:21:30.892140 systemd-logind[1571]: New session 23 of user core. 
Jan 21 06:21:30.874000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:30.913547 kernel: audit: type=1101 audit(1768976490.871:890): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:30.913772 kernel: audit: type=1103 audit(1768976490.874:891): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:30.913810 kernel: audit: type=1006 audit(1768976490.874:892): pid=5624 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 06:21:30.874000 audit[5624]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1591f070 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:30.948748 kernel: audit: type=1300 audit(1768976490.874:892): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1591f070 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:30.949111 kernel: audit: type=1327 audit(1768976490.874:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:30.874000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 06:21:30.962051 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 21 06:21:30.974000 audit[5624]: USER_START pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:30.977000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:31.011393 kernel: audit: type=1105 audit(1768976490.974:893): pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:31.011609 kernel: audit: type=1103 audit(1768976490.977:894): pid=5628 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:31.117756 sshd[5628]: Connection closed by 10.0.0.1 port 40986 Jan 21 06:21:31.117801 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 21 06:21:31.123000 audit[5624]: USER_END pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 21 06:21:31.130094 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Jan 21 06:21:31.131435 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:40986.service: Deactivated successfully. Jan 21 06:21:31.137076 systemd[1]: session-23.scope: Deactivated successfully. Jan 21 06:21:31.141071 systemd-logind[1571]: Removed session 23. Jan 21 06:21:31.124000 audit[5624]: CRED_DISP pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:31.169233 kernel: audit: type=1106 audit(1768976491.123:895): pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:31.169429 kernel: audit: type=1104 audit(1768976491.124:896): pid=5624 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 21 06:21:31.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.136:22-10.0.0.1:40986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 06:21:33.505255 kubelet[2998]: E0121 06:21:33.504315 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-797d998774-t5xkn" podUID="44e1484f-18ef-43d7-8551-7c92cf1926c4" Jan 21 06:21:33.703000 audit[5642]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:21:33.703000 audit[5642]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff51ed80d0 a2=0 a3=7fff51ed80bc items=0 ppid=3160 pid=5642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:33.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:21:33.714000 audit[5642]: NETFILTER_CFG table=nat:142 family=2 entries=104 op=nft_register_chain pid=5642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 06:21:33.714000 audit[5642]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff51ed80d0 a2=0 a3=7fff51ed80bc items=0 ppid=3160 pid=5642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 06:21:33.714000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 06:21:35.504919 kubelet[2998]: E0121 06:21:35.504866 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-lvqcb" podUID="0928ac10-29ff-4619-8155-c160108ee532" Jan 21 06:21:36.138332 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:54512.service - OpenSSH per-connection server daemon (10.0.0.1:54512). Jan 21 06:21:36.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.136:22-10.0.0.1:54512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 06:21:36.142907 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 06:21:36.142969 kernel: audit: type=1130 audit(1768976496.137:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.136:22-10.0.0.1:54512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 21 06:21:36.251000 audit[5644]: USER_ACCT pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.253008 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 54512 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:21:36.256990 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:21:36.269802 systemd-logind[1571]: New session 24 of user core.
Jan 21 06:21:36.254000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.303928 kernel: audit: type=1101 audit(1768976496.251:901): pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.304020 kernel: audit: type=1103 audit(1768976496.254:902): pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.304046 kernel: audit: type=1006 audit(1768976496.254:903): pid=5644 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Jan 21 06:21:36.319452 kernel: audit: type=1300 audit(1768976496.254:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2e7e1560 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:21:36.254000 audit[5644]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2e7e1560 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:21:36.347874 kernel: audit: type=1327 audit(1768976496.254:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 21 06:21:36.254000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 21 06:21:36.365179 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 21 06:21:36.371000 audit[5644]: USER_START pid=5644 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.375000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.420320 kernel: audit: type=1105 audit(1768976496.371:904): pid=5644 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.420441 kernel: audit: type=1103 audit(1768976496.375:905): pid=5648 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.583372 sshd[5648]: Connection closed by 10.0.0.1 port 54512
Jan 21 06:21:36.583925 sshd-session[5644]: pam_unix(sshd:session): session closed for user core
Jan 21 06:21:36.586000 audit[5644]: USER_END pid=5644 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.591941 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:54512.service: Deactivated successfully.
Jan 21 06:21:36.596415 systemd[1]: session-24.scope: Deactivated successfully.
Jan 21 06:21:36.598373 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit.
Jan 21 06:21:36.602049 systemd-logind[1571]: Removed session 24.
Jan 21 06:21:36.586000 audit[5644]: CRED_DISP pid=5644 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.646792 kernel: audit: type=1106 audit(1768976496.586:906): pid=5644 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.646911 kernel: audit: type=1104 audit(1768976496.586:907): pid=5644 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:36.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.136:22-10.0.0.1:54512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:21:38.506307 containerd[1588]: time="2026-01-21T06:21:38.505966215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 21 06:21:38.507789 kubelet[2998]: E0121 06:21:38.507411 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4vl7" podUID="219deac5-c979-42b1-a796-a0c185470d95"
Jan 21 06:21:38.587958 containerd[1588]: time="2026-01-21T06:21:38.587839665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 21 06:21:38.591201 containerd[1588]: time="2026-01-21T06:21:38.590480978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 21 06:21:38.591201 containerd[1588]: time="2026-01-21T06:21:38.591083544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 21 06:21:38.591732 kubelet[2998]: E0121 06:21:38.591350 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 21 06:21:38.591732 kubelet[2998]: E0121 06:21:38.591414 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 21 06:21:38.592304 kubelet[2998]: E0121 06:21:38.592169 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4br69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9p9f8_calico-system(18fcd4d3-26de-4ac6-99a6-06a703ea7790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 21 06:21:38.594268 kubelet[2998]: E0121 06:21:38.594111 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9p9f8" podUID="18fcd4d3-26de-4ac6-99a6-06a703ea7790"
Jan 21 06:21:39.504470 kubelet[2998]: E0121 06:21:39.504365 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76f4489f98-89ljm" podUID="d06b2fe8-bce2-4b8f-842a-8da146f1a644"
Jan 21 06:21:39.506072 containerd[1588]: time="2026-01-21T06:21:39.505996400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 21 06:21:39.578004 containerd[1588]: time="2026-01-21T06:21:39.577860323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 21 06:21:39.581376 containerd[1588]: time="2026-01-21T06:21:39.581248091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 21 06:21:39.581376 containerd[1588]: time="2026-01-21T06:21:39.581355631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 21 06:21:39.583343 kubelet[2998]: E0121 06:21:39.583219 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 21 06:21:39.583343 kubelet[2998]: E0121 06:21:39.583282 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 21 06:21:39.583977 kubelet[2998]: E0121 06:21:39.583427 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:44f28ba0df244f40918e802a350f80cc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 21 06:21:39.586723 containerd[1588]: time="2026-01-21T06:21:39.586476297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 21 06:21:39.650788 containerd[1588]: time="2026-01-21T06:21:39.650434339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 21 06:21:39.653162 containerd[1588]: time="2026-01-21T06:21:39.652883033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 21 06:21:39.653162 containerd[1588]: time="2026-01-21T06:21:39.653003218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 21 06:21:39.653775 kubelet[2998]: E0121 06:21:39.653316 2998 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 21 06:21:39.653775 kubelet[2998]: E0121 06:21:39.653361 2998 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 21 06:21:39.653775 kubelet[2998]: E0121 06:21:39.653469 2998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nntxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69d46b84b4-xb8qc_calico-system(dfd24090-6b99-4c4c-8800-9882cbbf99e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 21 06:21:39.655869 kubelet[2998]: E0121 06:21:39.655533 2998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d46b84b4-xb8qc" podUID="dfd24090-6b99-4c4c-8800-9882cbbf99e5"
Jan 21 06:21:41.603276 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:54526.service - OpenSSH per-connection server daemon (10.0.0.1:54526).
Jan 21 06:21:41.612046 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 21 06:21:41.612121 kernel: audit: type=1130 audit(1768976501.602:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.136:22-10.0.0.1:54526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:21:41.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.136:22-10.0.0.1:54526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:21:41.744000 audit[5662]: USER_ACCT pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.774939 kernel: audit: type=1101 audit(1768976501.744:910): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.758923 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 21 06:21:41.775416 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 54526 ssh2: RSA SHA256:vE9zPYrc+Z33b4XFlysvXeigfifktx1tns84exsQr8o
Jan 21 06:21:41.803492 kernel: audit: type=1103 audit(1768976501.752:911): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.752000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.784442 systemd-logind[1571]: New session 25 of user core.
Jan 21 06:21:41.752000 audit[5662]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb84e3fe0 a2=3 a3=0 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:21:41.822123 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 21 06:21:41.849402 kernel: audit: type=1006 audit(1768976501.752:912): pid=5662 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jan 21 06:21:41.849507 kernel: audit: type=1300 audit(1768976501.752:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb84e3fe0 a2=3 a3=0 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 21 06:21:41.850044 kernel: audit: type=1327 audit(1768976501.752:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 21 06:21:41.752000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 21 06:21:41.830000 audit[5662]: USER_START pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.896309 kernel: audit: type=1105 audit(1768976501.830:913): pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.896410 kernel: audit: type=1103 audit(1768976501.830:914): pid=5666 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:41.830000 audit[5666]: CRED_ACQ pid=5666 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:42.043323 sshd[5666]: Connection closed by 10.0.0.1 port 54526
Jan 21 06:21:42.043955 sshd-session[5662]: pam_unix(sshd:session): session closed for user core
Jan 21 06:21:42.046000 audit[5662]: USER_END pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:42.052128 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:54526.service: Deactivated successfully.
Jan 21 06:21:42.060170 systemd[1]: session-25.scope: Deactivated successfully.
Jan 21 06:21:42.063425 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit.
Jan 21 06:21:42.066395 systemd-logind[1571]: Removed session 25.
Jan 21 06:21:42.076809 kernel: audit: type=1106 audit(1768976502.046:915): pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:42.046000 audit[5662]: CRED_DISP pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:42.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.136:22-10.0.0.1:54526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 21 06:21:42.100836 kernel: audit: type=1104 audit(1768976502.046:916): pid=5662 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 21 06:21:42.502417 kubelet[2998]: E0121 06:21:42.502254 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 21 06:21:43.503082 kubelet[2998]: E0121 06:21:43.502983 2998 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"