Jan 14 13:39:15.681515 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 11:12:50 -00 2026
Jan 14 13:39:15.681625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9
Jan 14 13:39:15.681668 kernel: BIOS-provided physical RAM map:
Jan 14 13:39:15.681679 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 13:39:15.681690 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 13:39:15.681701 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 13:39:15.681713 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 13:39:15.681724 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 13:39:15.681826 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 13:39:15.681840 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 13:39:15.681860 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 13:39:15.681870 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 13:39:15.681882 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 13:39:15.681894 kernel: NX (Execute Disable) protection: active
Jan 14 13:39:15.681908 kernel: APIC: Static calls initialized
Jan 14 13:39:15.681924 kernel: SMBIOS 2.8 present.
Jan 14 13:39:15.681959 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 13:39:15.681971 kernel: DMI: Memory slots populated: 1/1
Jan 14 13:39:15.681983 kernel: Hypervisor detected: KVM
Jan 14 13:39:15.681996 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 13:39:15.682007 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 13:39:15.682019 kernel: kvm-clock: using sched offset of 11537948458 cycles
Jan 14 13:39:15.682032 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 13:39:15.682045 kernel: tsc: Detected 2445.424 MHz processor
Jan 14 13:39:15.682062 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:39:15.682075 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:39:15.682087 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 13:39:15.682099 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 13:39:15.682113 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:39:15.682126 kernel: Using GB pages for direct mapping
Jan 14 13:39:15.682138 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:39:15.682154 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 13:39:15.682166 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682179 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682192 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682204 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 13:39:15.682217 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682229 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682247 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682259 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:39:15.682279 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 13:39:15.682291 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 13:39:15.682305 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 13:39:15.682318 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 13:39:15.682336 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 13:39:15.682348 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 13:39:15.682361 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 13:39:15.682374 kernel: No NUMA configuration found
Jan 14 13:39:15.682387 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 13:39:15.682401 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 13:39:15.682418 kernel: Zone ranges:
Jan 14 13:39:15.682432 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:39:15.682444 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 13:39:15.682456 kernel: Normal empty
Jan 14 13:39:15.682470 kernel: Device empty
Jan 14 13:39:15.682483 kernel: Movable zone start for each node
Jan 14 13:39:15.682495 kernel: Early memory node ranges
Jan 14 13:39:15.682508 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 13:39:15.682552 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 13:39:15.682566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 13:39:15.682579 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:39:15.682592 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 13:39:15.682626 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 13:39:15.682639 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 13:39:15.682653 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 13:39:15.682671 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:39:15.682684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 13:39:15.682718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 13:39:15.682733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:39:15.682818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 13:39:15.682834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 13:39:15.682847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:39:15.682865 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 13:39:15.682878 kernel: TSC deadline timer available
Jan 14 13:39:15.682892 kernel: CPU topo: Max. logical packages: 1
Jan 14 13:39:15.682905 kernel: CPU topo: Max. logical dies: 1
Jan 14 13:39:15.682918 kernel: CPU topo: Max. dies per package: 1
Jan 14 13:39:15.682931 kernel: CPU topo: Max. threads per core: 1
Jan 14 13:39:15.682944 kernel: CPU topo: Num. cores per package: 4
Jan 14 13:39:15.682956 kernel: CPU topo: Num. threads per package: 4
Jan 14 13:39:15.682973 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 13:39:15.682987 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 13:39:15.683001 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 13:39:15.683014 kernel: kvm-guest: setup PV sched yield
Jan 14 13:39:15.683027 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 13:39:15.683039 kernel: Booting paravirtualized kernel on KVM
Jan 14 13:39:15.683053 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:39:15.683070 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 13:39:15.683084 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 13:39:15.683097 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 13:39:15.683110 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 13:39:15.683122 kernel: kvm-guest: PV spinlocks enabled
Jan 14 13:39:15.683135 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:39:15.683150 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9
Jan 14 13:39:15.683195 kernel: random: crng init done
Jan 14 13:39:15.683207 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:39:15.683220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 13:39:15.683234 kernel: Fallback order for Node 0: 0
Jan 14 13:39:15.683247 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 13:39:15.683259 kernel: Policy zone: DMA32
Jan 14 13:39:15.683273 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:39:15.683317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 13:39:15.683329 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 13:39:15.683342 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 13:39:15.683355 kernel: Dynamic Preempt: voluntary
Jan 14 13:39:15.683367 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:39:15.683381 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:39:15.683394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 13:39:15.683413 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:39:15.683451 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:39:15.683464 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:39:15.683477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:39:15.683491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 13:39:15.683504 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 13:39:15.683517 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 13:39:15.683530 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 13:39:15.683569 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 13:39:15.683583 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:39:15.683669 kernel: Console: colour VGA+ 80x25
Jan 14 13:39:15.683707 kernel: printk: legacy console [ttyS0] enabled
Jan 14 13:39:15.683721 kernel: ACPI: Core revision 20240827
Jan 14 13:39:15.683736 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 13:39:15.683834 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:39:15.683870 kernel: x2apic enabled
Jan 14 13:39:15.683885 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 13:39:15.686098 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 13:39:15.686122 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 13:39:15.686137 kernel: kvm-guest: setup PV IPIs
Jan 14 13:39:15.686151 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 13:39:15.686172 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 14 13:39:15.686186 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 14 13:39:15.686200 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 13:39:15.686215 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 13:39:15.686229 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 13:39:15.686243 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:39:15.686256 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:39:15.686276 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 13:39:15.686289 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:39:15.686303 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 13:39:15.686317 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 13:39:15.686331 kernel: active return thunk: srso_alias_return_thunk
Jan 14 13:39:15.686344 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 13:39:15.686390 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 13:39:15.686404 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:39:15.686418 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:39:15.686431 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:39:15.686444 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:39:15.686458 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:39:15.686472 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 13:39:15.686514 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:39:15.686529 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:39:15.686541 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 13:39:15.686555 kernel: landlock: Up and running.
Jan 14 13:39:15.686568 kernel: SELinux: Initializing.
Jan 14 13:39:15.686581 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 13:39:15.686595 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 13:39:15.686656 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 13:39:15.686671 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 13:39:15.686685 kernel: signal: max sigframe size: 1776
Jan 14 13:39:15.686699 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:39:15.686713 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:39:15.686728 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 13:39:15.686742 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:39:15.686835 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:39:15.686855 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:39:15.686868 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 13:39:15.686882 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 13:39:15.686896 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 14 13:39:15.686911 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 14 13:39:15.686924 kernel: devtmpfs: initialized
Jan 14 13:39:15.686937 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:39:15.686979 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:39:15.686994 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 13:39:15.687008 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:39:15.687021 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:39:15.687034 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:39:15.687048 kernel: audit: type=2000 audit(1768397947.782:1): state=initialized audit_enabled=0 res=1
Jan 14 13:39:15.687062 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:39:15.687081 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:39:15.687095 kernel: cpuidle: using governor menu
Jan 14 13:39:15.687109 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:39:15.687123 kernel: dca service started, version 1.12.1
Jan 14 13:39:15.687137 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 13:39:15.687150 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 13:39:15.687164 kernel: PCI: Using configuration type 1 for base access
Jan 14 13:39:15.687182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:39:15.687195 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:39:15.687209 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:39:15.687223 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:39:15.687237 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:39:15.687251 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:39:15.687265 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:39:15.687310 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:39:15.687324 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:39:15.687338 kernel: ACPI: Interpreter enabled
Jan 14 13:39:15.687351 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 13:39:15.687365 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:39:15.687378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:39:15.687392 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 13:39:15.687410 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 13:39:15.687424 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 13:39:15.687964 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 13:39:15.688311 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 13:39:15.688638 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 13:39:15.688657 kernel: PCI host bridge to bus 0000:00
Jan 14 13:39:15.689072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 13:39:15.689371 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 13:39:15.689664 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 13:39:15.690062 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 13:39:15.690361 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 13:39:15.690686 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 13:39:15.691108 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 13:39:15.691456 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 13:39:15.691872 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 13:39:15.692261 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 13:39:15.692582 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 13:39:15.693013 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 13:39:15.693333 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 13:39:15.693703 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 13:39:15.694113 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 13:39:15.694431 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 13:39:15.694824 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 13:39:15.695178 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 13:39:15.695501 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 13:39:15.695904 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 13:39:15.696321 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 13:39:15.696614 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 13:39:15.697043 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 13:39:15.697351 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 13:39:15.697611 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 13:39:15.697957 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 13:39:15.700397 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 13:39:15.701065 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 13:39:15.701447 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 13:39:15.701906 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 13:39:15.702232 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 13:39:15.702568 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 13:39:15.702949 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 13:39:15.702975 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 13:39:15.702989 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 13:39:15.703002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 13:39:15.703015 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 13:39:15.703027 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 13:39:15.703040 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 13:39:15.703053 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 13:39:15.703070 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 13:39:15.703083 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 13:39:15.703095 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 13:39:15.703108 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 13:39:15.703120 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 13:39:15.703133 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 13:39:15.703146 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 13:39:15.703162 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 13:39:15.703175 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 13:39:15.703187 kernel: iommu: Default domain type: Translated
Jan 14 13:39:15.703200 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:39:15.703212 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:39:15.703224 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 13:39:15.703237 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 13:39:15.703250 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 13:39:15.703542 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 13:39:15.703905 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 13:39:15.705212 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 13:39:15.705259 kernel: vgaarb: loaded
Jan 14 13:39:15.705273 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 13:39:15.705286 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 13:39:15.705354 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 13:39:15.705368 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:39:15.705381 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:39:15.705394 kernel: pnp: PnP ACPI init
Jan 14 13:39:15.708247 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 13:39:15.708286 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 13:39:15.708300 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:39:15.708331 kernel: NET: Registered PF_INET protocol family
Jan 14 13:39:15.708345 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:39:15.708358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 13:39:15.708371 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:39:15.708383 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 13:39:15.708396 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 13:39:15.708409 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 13:39:15.708462 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 13:39:15.708475 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 13:39:15.708488 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:39:15.708501 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:39:15.708893 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 13:39:15.709166 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 13:39:15.710489 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 13:39:15.710882 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 13:39:15.711151 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 13:39:15.711415 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 13:39:15.711433 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:39:15.711446 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 14 13:39:15.711459 kernel: Initialise system trusted keyrings
Jan 14 13:39:15.713320 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 13:39:15.713352 kernel: Key type asymmetric registered
Jan 14 13:39:15.714742 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:39:15.714970 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 13:39:15.714986 kernel: io scheduler mq-deadline registered
Jan 14 13:39:15.714997 kernel: io scheduler kyber registered
Jan 14 13:39:15.715010 kernel: io scheduler bfq registered
Jan 14 13:39:15.715023 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:39:15.715056 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 13:39:15.715069 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 13:39:15.715082 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 13:39:15.715094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:39:15.715107 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:39:15.715121 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 13:39:15.715133 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 13:39:15.715150 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 13:39:15.715600 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 13:39:15.715621 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 13:39:15.715991 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 13:39:15.716723 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T13:39:11 UTC (1768397951)
Jan 14 13:39:15.718483 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 13:39:15.718546 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 13:39:15.718559 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:39:15.718572 kernel: Segment Routing with IPv6
Jan 14 13:39:15.718585 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:39:15.718598 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:39:15.718611 kernel: Key type dns_resolver registered
Jan 14 13:39:15.718624 kernel: IPI shorthand broadcast: enabled
Jan 14 13:39:15.718667 kernel: sched_clock: Marking stable (3936015546, 855484123)->(5319552913, -528053244)
Jan 14 13:39:15.718680 kernel: registered taskstats version 1
Jan 14 13:39:15.718692 kernel: Loading compiled-in X.509 certificates
Jan 14 13:39:15.718706 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8d0aa6f955c6f54d5fb15cad90d0ea8c698688e'
Jan 14 13:39:15.718718 kernel: Demotion targets for Node 0: null
Jan 14 13:39:15.718731 kernel: Key type .fscrypt registered
Jan 14 13:39:15.719230 kernel: Key type fscrypt-provisioning registered
Jan 14 13:39:15.719356 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:39:15.719372 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:39:15.719386 kernel: ima: No architecture policies found
Jan 14 13:39:15.719399 kernel: clk: Disabling unused clocks
Jan 14 13:39:15.719413 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 13:39:15.719428 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 13:39:15.719441 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 14 13:39:15.719489 kernel: Run /init as init process
Jan 14 13:39:15.719503 kernel: with arguments:
Jan 14 13:39:15.719517 kernel: /init
Jan 14 13:39:15.719531 kernel: with environment:
Jan 14 13:39:15.719563 kernel: HOME=/
Jan 14 13:39:15.719578 kernel: TERM=linux
Jan 14 13:39:15.719592 kernel: SCSI subsystem initialized
Jan 14 13:39:15.719605 kernel: libata version 3.00 loaded.
Jan 14 13:39:15.720067 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 13:39:15.720089 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 13:39:15.720372 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 13:39:15.720658 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 13:39:15.721032 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 13:39:15.721348 kernel: scsi host0: ahci
Jan 14 13:39:15.721658 kernel: scsi host1: ahci
Jan 14 13:39:15.722043 kernel: scsi host2: ahci
Jan 14 13:39:15.730711 kernel: scsi host3: ahci
Jan 14 13:39:15.739546 kernel: scsi host4: ahci
Jan 14 13:39:15.740058 kernel: scsi host5: ahci
Jan 14 13:39:15.740118 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 13:39:15.740133 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 13:39:15.740147 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 13:39:15.740159 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 13:39:15.740173 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 13:39:15.740186 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 13:39:15.740288 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 13:39:15.740301 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 13:39:15.740314 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 13:39:15.740327 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 13:39:15.740340 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 13:39:15.740351 kernel: ata3.00: applying bridge limits
Jan 14 13:39:15.740364 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 13:39:15.740382 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 13:39:15.740395 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 13:39:15.740408 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 13:39:15.740420 kernel: ata3.00: configured for UDMA/100
Jan 14 13:39:15.740897 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 13:39:15.741293 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 13:39:15.741566 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 13:39:15.741591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 13:39:15.741604 kernel: GPT:16515071 != 27000831
Jan 14 13:39:15.741617 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 13:39:15.741630 kernel: GPT:16515071 != 27000831
Jan 14 13:39:15.741643 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 13:39:15.741655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 13:39:15.742093 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 13:39:15.742113 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:39:15.742407 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 13:39:15.742425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:39:15.742439 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:39:15.742451 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 13:39:15.742463 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 13:39:15.742482 kernel: raid6: avx2x4 gen() 21713 MB/s
Jan 14 13:39:15.742495 kernel: raid6: avx2x2 gen() 22125 MB/s
Jan 14 13:39:15.742508 kernel: raid6: avx2x1 gen() 12852 MB/s
Jan 14 13:39:15.742521 kernel: raid6: using algorithm avx2x2 gen() 22125 MB/s
Jan 14 13:39:15.742534 kernel: raid6: .... xor() 16911 MB/s, rmw enabled
Jan 14 13:39:15.742547 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 13:39:15.742560 kernel: xor: automatically using best checksumming function avx
Jan 14 13:39:15.742577 kernel: hrtimer: interrupt took 2382930 ns
Jan 14 13:39:15.742589 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:39:15.742603 kernel: BTRFS: device fsid a2d7d9b8-1cc4-4aa6-91f7-011fd4658df9 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 14 13:39:15.742620 kernel: BTRFS info (device dm-0): first mount of filesystem a2d7d9b8-1cc4-4aa6-91f7-011fd4658df9
Jan 14 13:39:15.742632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:39:15.742649 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:39:15.742663 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 13:39:15.742675 kernel: loop: module loaded
Jan 14 13:39:15.742689 kernel: loop0: detected capacity change from 0 to 100536
Jan 14 13:39:15.742705 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:39:15.742720 systemd[1]: Successfully made /usr/ read-only.
Jan 14 13:39:15.742740 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 13:39:15.742876 systemd[1]: Detected virtualization kvm.
Jan 14 13:39:15.742889 systemd[1]: Detected architecture x86-64.
Jan 14 13:39:15.742903 systemd[1]: Running in initrd.
Jan 14 13:39:15.742916 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:39:15.742928 systemd[1]: Hostname set to .
Jan 14 13:39:15.742946 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 13:39:15.742959 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:39:15.742973 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:39:15.742987 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:39:15.743000 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:39:15.743015 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:39:15.743029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:39:15.743047 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:39:15.743061 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:39:15.743074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:39:15.743087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:39:15.743100 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 13:39:15.743117 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:39:15.743131 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:39:15.743143 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:39:15.743156 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:39:15.743171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:39:15.743184 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:39:15.743196 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 13:39:15.743303 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:39:15.743317 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 13:39:15.743330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:39:15.743344 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:39:15.743357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:39:15.743371 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:39:15.765360 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:39:15.765443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:39:15.765460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:39:15.765476 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:39:15.765518 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 13:39:15.765534 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:39:15.765550 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:39:15.765566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:39:15.765588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:39:15.765605 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:39:15.765621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:39:15.765893 systemd-journald[319]: Collecting audit messages is enabled.
Jan 14 13:39:15.765962 kernel: audit: type=1130 audit(1768397955.675:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:15.766005 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:39:15.766028 kernel: audit: type=1130 audit(1768397955.688:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:15.766045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:39:15.766062 systemd-journald[319]: Journal started
Jan 14 13:39:15.766092 systemd-journald[319]: Runtime Journal (/run/log/journal/f176b779d68b467592c6eddb01ae6289) is 6M, max 48.2M, 42.1M free.
Jan 14 13:39:15.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:15.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:15.771063 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:39:15.771099 kernel: audit: type=1130 audit(1768397955.769:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:15.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:15.787072 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:39:15.802840 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:39:15.808288 systemd-modules-load[322]: Inserted module 'br_netfilter'
Jan 14 13:39:16.035957 kernel: Bridge firewalling registered
Jan 14 13:39:15.811986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:39:15.883559 systemd-tmpfiles[329]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 13:39:16.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.069742 kernel: audit: type=1130 audit(1768397956.035:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.083258 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:39:16.109375 kernel: audit: type=1130 audit(1768397956.087:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.104880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:39:16.132213 kernel: audit: type=1130 audit(1768397956.113:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.137829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:39:16.164929 kernel: audit: type=1130 audit(1768397956.137:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.165595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:39:16.176491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:39:16.184024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:39:16.221312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:39:16.232638 kernel: audit: type=1130 audit(1768397956.220:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.238287 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:39:16.243190 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:39:16.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.290263 kernel: audit: type=1130 audit(1768397956.237:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.290456 kernel: audit: type=1334 audit(1768397956.241:11): prog-id=6 op=LOAD
Jan 14 13:39:16.241000 audit: BPF prog-id=6 op=LOAD
Jan 14 13:39:16.304054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:39:16.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.312744 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:39:16.378691 dracut-cmdline[360]: dracut-109
Jan 14 13:39:16.395731 dracut-cmdline[360]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9
Jan 14 13:39:16.424535 systemd-resolved[350]: Positive Trust Anchors:
Jan 14 13:39:16.424573 systemd-resolved[350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:39:16.424578 systemd-resolved[350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 13:39:16.424605 systemd-resolved[350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:39:16.515861 systemd-resolved[350]: Defaulting to hostname 'linux'.
Jan 14 13:39:16.520422 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:39:16.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:16.529237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:39:16.730473 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:39:16.781994 kernel: iscsi: registered transport (tcp)
Jan 14 13:39:16.836702 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:39:16.837090 kernel: QLogic iSCSI HBA Driver
Jan 14 13:39:16.942227 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:39:17.003010 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:39:17.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.007103 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:39:17.199688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:39:17.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.207104 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:39:17.213054 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:39:17.283615 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:39:17.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.296000 audit: BPF prog-id=7 op=LOAD
Jan 14 13:39:17.296000 audit: BPF prog-id=8 op=LOAD
Jan 14 13:39:17.298452 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:39:17.412335 systemd-udevd[588]: Using default interface naming scheme 'v257'.
Jan 14 13:39:17.441428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:39:17.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.466004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:39:17.527657 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:39:17.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.537000 audit: BPF prog-id=9 op=LOAD
Jan 14 13:39:17.541148 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:39:17.550225 dracut-pre-trigger[665]: rd.md=0: removing MD RAID activation
Jan 14 13:39:17.623247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:39:17.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.630362 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:39:17.677356 systemd-networkd[703]: lo: Link UP
Jan 14 13:39:17.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.677395 systemd-networkd[703]: lo: Gained carrier
Jan 14 13:39:17.678660 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:39:17.683622 systemd[1]: Reached target network.target - Network.
Jan 14 13:39:17.882184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:39:17.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:17.892250 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:39:18.398433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 13:39:18.474353 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 13:39:18.497149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 13:39:18.524988 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:39:18.538611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 13:39:18.566841 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:39:18.593554 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 13:39:18.602205 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:39:18.623039 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 14 13:39:18.604699 systemd-networkd[703]: eth0: Link UP
Jan 14 13:39:18.606238 systemd-networkd[703]: eth0: Gained carrier
Jan 14 13:39:18.606252 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 13:39:18.609453 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:39:18.610507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:39:18.613011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:39:18.627087 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:39:18.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:18.678747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:39:18.816056 systemd-networkd[703]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 13:39:18.822940 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:39:18.830972 disk-uuid[837]: Primary Header is updated.
Jan 14 13:39:18.830972 disk-uuid[837]: Secondary Entries is updated.
Jan 14 13:39:18.830972 disk-uuid[837]: Secondary Header is updated.
Jan 14 13:39:18.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:18.833070 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:39:18.837355 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:39:18.846657 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:39:18.859444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:39:19.147642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:39:19.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:19.228696 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:39:19.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:19.835316 systemd-networkd[703]: eth0: Gained IPv6LL
Jan 14 13:39:19.920981 disk-uuid[838]: Warning: The kernel is still using the old partition table.
Jan 14 13:39:19.920981 disk-uuid[838]: The new table will be used at the next reboot or after you
Jan 14 13:39:19.920981 disk-uuid[838]: run partprobe(8) or kpartx(8)
Jan 14 13:39:19.920981 disk-uuid[838]: The operation has completed successfully.
Jan 14 13:39:19.963316 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:39:19.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:19.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:19.964270 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:39:19.973625 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:39:20.101728 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860)
Jan 14 13:39:20.111576 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:39:20.111634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:39:20.120422 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 13:39:20.120465 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 13:39:20.135904 kernel: BTRFS info (device vda6): last unmount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:39:20.138849 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:39:20.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:20.160281 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:39:21.115521 ignition[879]: Ignition 2.24.0
Jan 14 13:39:21.115566 ignition[879]: Stage: fetch-offline
Jan 14 13:39:21.115709 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:39:21.115726 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:39:21.116153 ignition[879]: parsed url from cmdline: ""
Jan 14 13:39:21.116159 ignition[879]: no config URL provided
Jan 14 13:39:21.116535 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:39:21.116554 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:39:21.116667 ignition[879]: op(1): [started] loading QEMU firmware config module
Jan 14 13:39:21.116673 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 13:39:21.140315 ignition[879]: op(1): [finished] loading QEMU firmware config module
Jan 14 13:39:21.365441 ignition[879]: parsing config with SHA512: d3b8ed66800c15a4b3ed76b03e5a35b582368b10729a855c3d643b4faf750524b31fccfe9c7237adc87df40e29098c9aa384aa957fd887da57b96be6a87f9a28
Jan 14 13:39:21.373828 unknown[879]: fetched base config from "system"
Jan 14 13:39:21.373838 unknown[879]: fetched user config from "qemu"
Jan 14 13:39:21.374288 ignition[879]: fetch-offline: fetch-offline passed
Jan 14 13:39:21.374460 ignition[879]: Ignition finished successfully
Jan 14 13:39:21.380025 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:39:21.398007 kernel: kauditd_printk_skb: 20 callbacks suppressed
Jan 14 13:39:21.398040 kernel: audit: type=1130 audit(1768397961.386:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:21.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:21.387184 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 13:39:21.388623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:39:21.905738 ignition[889]: Ignition 2.24.0
Jan 14 13:39:21.905847 ignition[889]: Stage: kargs
Jan 14 13:39:21.906354 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:39:21.907964 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:39:21.917499 ignition[889]: kargs: kargs passed
Jan 14 13:39:21.917591 ignition[889]: Ignition finished successfully
Jan 14 13:39:21.924399 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:39:21.956438 kernel: audit: type=1130 audit(1768397961.923:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:21.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:21.927321 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:39:22.069525 ignition[896]: Ignition 2.24.0
Jan 14 13:39:22.069566 ignition[896]: Stage: disks
Jan 14 13:39:22.070067 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:39:22.070084 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:39:22.084367 ignition[896]: disks: disks passed
Jan 14 13:39:22.084469 ignition[896]: Ignition finished successfully
Jan 14 13:39:22.093122 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:39:22.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:22.100932 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:39:22.116460 kernel: audit: type=1130 audit(1768397962.099:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:22.107474 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:39:22.116632 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:39:22.133881 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:39:22.134048 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:39:22.169837 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:39:22.240216 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 13:39:22.248483 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:39:22.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:22.259709 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:39:22.271724 kernel: audit: type=1130 audit(1768397962.257:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:22.584878 kernel: EXT4-fs (vda9): mounted filesystem 00eaf6ed-0a89-4fef-afb6-3b81d372e1c1 r/w with ordered data mode. Quota mode: none.
Jan 14 13:39:22.586174 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:39:22.589585 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:39:22.593840 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:39:22.601954 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:39:22.610387 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 13:39:22.610483 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:39:22.610523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:39:22.639222 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:39:22.654558 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:39:22.664496 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913)
Jan 14 13:39:22.664527 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:39:22.664649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:39:22.677561 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 13:39:22.677628 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 13:39:22.679583 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:39:22.939542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:39:22.956015 kernel: audit: type=1130 audit(1768397962.939:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:22.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:22.942194 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:39:22.957111 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:39:23.021028 kernel: BTRFS info (device vda6): last unmount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:39:23.040857 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:39:23.116214 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:39:23.130277 kernel: audit: type=1130 audit(1768397963.119:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:23.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:23.599282 ignition[1011]: INFO : Ignition 2.24.0
Jan 14 13:39:23.603495 ignition[1011]: INFO : Stage: mount
Jan 14 13:39:23.606910 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:39:23.606910 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:39:23.606910 ignition[1011]: INFO : mount: mount passed
Jan 14 13:39:23.606910 ignition[1011]: INFO : Ignition finished successfully
Jan 14 13:39:23.639149 kernel: audit: type=1130 audit(1768397963.615:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:23.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:23.610123 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:39:23.617502 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:39:23.665259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:39:23.715530 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Jan 14 13:39:23.715591 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:39:23.715613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:39:23.729725 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 13:39:23.729914 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 13:39:23.733197 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:39:23.781471 ignition[1041]: INFO : Ignition 2.24.0
Jan 14 13:39:23.781471 ignition[1041]: INFO : Stage: files
Jan 14 13:39:23.788384 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:39:23.788384 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:39:23.788384 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:39:23.788384 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:39:23.788384 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:39:23.816498 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:39:23.816498 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:39:23.816498 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:39:23.816498 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 13:39:23.816498 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 14 13:39:23.790863 unknown[1041]: wrote ssh authorized keys file for user: core
Jan 14 13:39:23.890019 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:39:24.418593 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 13:39:24.458068 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:39:24.483672 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:39:24.483672 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:39:24.511667 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:39:24.511667 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:39:24.511667 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:39:24.511667 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:39:24.511667 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:39:24.612217 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:39:24.612217 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:39:24.612217 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 13:39:24.612217 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 13:39:24.612217 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 13:39:24.612217 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 14 13:39:25.133313 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 13:39:26.878405 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 13:39:26.878405 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 13:39:26.889921 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 13:39:26.975875 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 13:39:26.988878 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 13:39:26.995599 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 13:39:26.995599 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:39:26.995599 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:39:26.995599 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:39:26.995599 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:39:26.995599 ignition[1041]: INFO : files: files passed
Jan 14 13:39:26.995599 ignition[1041]: INFO : Ignition finished successfully
Jan 14 13:39:27.053441 kernel: audit: type=1130 audit(1768397967.000:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:26.993968 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:39:27.002642 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:39:27.053520 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:39:27.058519 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:39:27.072126 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:39:27.095414 kernel: audit: type=1130 audit(1768397967.073:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.095542 kernel: audit: type=1131 audit(1768397967.073:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.095877 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 13:39:27.107065 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:39:27.112899 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:39:27.112899 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:39:27.126673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:39:27.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.131674 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:39:27.149535 kernel: audit: type=1130 audit(1768397967.130:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.149975 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:39:27.245896 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:39:27.246138 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:39:27.277372 kernel: audit: type=1130 audit(1768397967.249:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.277416 kernel: audit: type=1131 audit(1768397967.249:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.250439 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:39:27.277467 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:39:27.282430 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:39:27.284049 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:39:27.341142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:39:27.356857 kernel: audit: type=1130 audit(1768397967.345:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.350058 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:39:27.390502 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:39:27.390832 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:39:27.399463 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:39:27.404285 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:39:27.421022 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:39:27.421222 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:39:27.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.444525 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:39:27.452417 kernel: audit: type=1131 audit(1768397967.429:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.444981 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:39:27.455929 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:39:27.459135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:39:27.476302 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:39:27.476605 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 13:39:27.485522 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:39:27.494257 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:39:27.502467 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:39:27.512531 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:39:27.521645 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:39:27.522959 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:39:27.523205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:39:27.552062 kernel: audit: type=1131 audit(1768397967.540:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.552345 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:39:27.555973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:39:27.562480 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:39:27.566045 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:39:27.572140 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:39:27.572353 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:39:27.577048 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:39:27.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.577270 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:39:27.601572 kernel: audit: type=1131 audit(1768397967.575:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.587538 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:39:27.594203 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:39:27.602007 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:39:27.610675 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:39:27.614716 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:39:27.622228 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:39:27.622423 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:39:27.625052 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:39:27.625219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:39:27.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.633036 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 13:39:27.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.633207 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 13:39:27.636017 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:39:27.636251 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:39:27.644560 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:39:27.644753 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:39:27.659985 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:39:27.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.668524 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:39:27.668903 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:39:27.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.673189 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:39:27.681680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:39:27.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.682051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:39:27.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.689126 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:39:27.689244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:39:27.695859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:39:27.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.724857 ignition[1098]: INFO : Ignition 2.24.0
Jan 14 13:39:27.724857 ignition[1098]: INFO : Stage: umount
Jan 14 13:39:27.724857 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:39:27.724857 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:39:27.724857 ignition[1098]: INFO : umount: umount passed
Jan 14 13:39:27.724857 ignition[1098]: INFO : Ignition finished successfully
Jan 14 13:39:27.696101 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:39:27.715339 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:39:27.715519 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:39:27.729383 systemd[1]: Stopped target network.target - Network.
Jan 14 13:39:27.730188 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:39:27.730329 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:39:27.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.770105 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:39:27.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.770221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:39:27.774498 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:39:27.774711 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:39:27.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.793638 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:39:27.793875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:39:27.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.802353 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:39:27.810560 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:39:27.816758 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:39:27.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.818081 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:39:27.818263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:39:27.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.824261 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:39:27.824464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:39:27.838893 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:39:27.839060 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:39:27.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.848603 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:39:27.848899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:39:27.877000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 13:39:27.861506 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:39:27.884000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 13:39:27.861721 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:39:27.878940 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 13:39:27.881327 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:39:27.881493 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:39:27.903462 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:39:27.910976 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:39:27.911131 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:39:27.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.928929 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:39:27.929122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:39:27.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.937089 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:39:27.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.937158 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:39:27.946029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:39:27.970264 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:39:27.970507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:39:27.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.986084 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:39:27.986225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:39:27.994239 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:39:28.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:27.994337 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:39:28.002062 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:39:28.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.002144 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:39:28.010666 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:39:28.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.010733 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:39:28.023394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:39:28.023481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:39:28.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.038312 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:39:28.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.045370 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 13:39:28.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.045443 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:39:28.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:28.053590 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:39:28.053678 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:39:28.054499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:39:28.054576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:39:28.068865 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:39:28.069059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:39:28.077232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:39:28.077402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:39:28.082448 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:39:28.097714 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:39:28.127347 systemd[1]: Switching root.
Jan 14 13:39:28.173650 systemd-journald[319]: Journal stopped
Jan 14 13:39:29.892084 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Jan 14 13:39:29.892237 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 13:39:29.892264 kernel: SELinux: policy capability open_perms=1
Jan 14 13:39:29.892285 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 13:39:29.892365 kernel: SELinux: policy capability always_check_network=0
Jan 14 13:39:29.892396 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 13:39:29.892417 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 13:39:29.892435 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 13:39:29.892456 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 13:39:29.892475 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 13:39:29.892495 systemd[1]: Successfully loaded SELinux policy in 94.858ms.
Jan 14 13:39:29.892540 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.733ms.
Jan 14 13:39:29.892564 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 13:39:29.892584 systemd[1]: Detected virtualization kvm.
Jan 14 13:39:29.892601 systemd[1]: Detected architecture x86-64.
Jan 14 13:39:29.892621 systemd[1]: Detected first boot.
Jan 14 13:39:29.892639 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 13:39:29.892658 zram_generator::config[1142]: No configuration found.
Jan 14 13:39:29.892716 kernel: Guest personality initialized and is inactive
Jan 14 13:39:29.892765 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 13:39:29.892868 kernel: Initialized host personality
Jan 14 13:39:29.892889 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 13:39:29.892919 systemd[1]: Populated /etc with preset unit settings.
Jan 14 13:39:29.892941 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 13:39:29.892961 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 13:39:29.892987 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 13:39:29.893011 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 13:39:29.893031 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 13:39:29.893049 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 13:39:29.893074 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 13:39:29.893093 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 13:39:29.893112 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 13:39:29.893171 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 13:39:29.893194 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 13:39:29.893215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:39:29.893236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:39:29.893258 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 13:39:29.893278 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 13:39:29.893298 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 13:39:29.893326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:39:29.893390 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 13:39:29.893411 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:39:29.893433 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:39:29.893455 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 13:39:29.893475 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 13:39:29.893506 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:39:29.893528 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 13:39:29.893548 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:39:29.893568 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:39:29.893593 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 13:39:29.893614 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:39:29.893637 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:39:29.893662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 13:39:29.893683 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 13:39:29.893702 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 13:39:29.893720 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 13:39:29.893953 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 13:39:29.893984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:39:29.894008 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 13:39:29.894045 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 13:39:29.894066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:39:29.894137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:39:29.894158 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 13:39:29.894180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 13:39:29.894198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 13:39:29.894217 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 13:39:29.894277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:39:29.894301 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 13:39:29.894324 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 13:39:29.894416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 13:39:29.894440 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 13:39:29.894459 systemd[1]: Reached target machines.target - Containers.
Jan 14 13:39:29.894480 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 13:39:29.894506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:39:29.894528 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:39:29.894548 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 13:39:29.894569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:39:29.894588 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:39:29.894608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:39:29.894628 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 13:39:29.894655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:39:29.894678 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 13:39:29.894752 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 13:39:29.894862 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 13:39:29.894887 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 13:39:29.894907 kernel: fuse: init (API version 7.41)
Jan 14 13:39:29.894932 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 13:39:29.894954 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 13:39:29.894978 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:39:29.894997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:39:29.895027 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 13:39:29.895046 kernel: ACPI: bus type drm_connector registered
Jan 14 13:39:29.895067 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 13:39:29.895088 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 13:39:29.895107 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:39:29.895162 systemd-journald[1228]: Collecting audit messages is enabled.
Jan 14 13:39:29.895207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:39:29.895231 systemd-journald[1228]: Journal started
Jan 14 13:39:29.895265 systemd-journald[1228]: Runtime Journal (/run/log/journal/f176b779d68b467592c6eddb01ae6289) is 6M, max 48.2M, 42.1M free.
Jan 14 13:39:29.534000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 13:39:29.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.806000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 13:39:29.806000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 13:39:29.807000 audit: BPF prog-id=15 op=LOAD
Jan 14 13:39:29.807000 audit: BPF prog-id=16 op=LOAD
Jan 14 13:39:29.808000 audit: BPF prog-id=17 op=LOAD
Jan 14 13:39:29.887000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 13:39:29.887000 audit[1228]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff15c77e30 a2=4000 a3=0 items=0 ppid=1 pid=1228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:39:29.887000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 13:39:29.287574 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 13:39:29.302596 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 13:39:29.303860 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 13:39:29.304474 systemd[1]: systemd-journald.service: Consumed 1.126s CPU time.
Jan 14 13:39:29.914250 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:39:29.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.916710 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 13:39:29.921428 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 13:39:29.926418 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 13:39:29.930703 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 13:39:29.935442 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:39:29.940468 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:39:29.947245 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:39:29.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.952593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:39:29.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.958280 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:39:29.958652 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:39:29.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.964529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:39:29.965012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:39:29.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.970905 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:39:29.971267 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:39:29.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.976718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:39:29.977174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:39:29.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.983246 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:39:29.983605 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:39:29.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.988921 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:39:29.989270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:39:29.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.993846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:39:29.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:29.999474 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:39:30.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.007291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:39:30.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.014150 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 13:39:30.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.032880 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:39:30.038587 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 13:39:30.046462 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:39:30.052456 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:39:30.056708 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:39:30.056758 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:39:30.062705 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 13:39:30.068923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:39:30.069089 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 13:39:30.071743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:39:30.078251 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:39:30.083109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:39:30.084981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:39:30.089945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:39:30.099472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:39:30.108035 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:39:30.122567 systemd-journald[1228]: Time spent on flushing to /var/log/journal/f176b779d68b467592c6eddb01ae6289 is 37.991ms for 1100 entries.
Jan 14 13:39:30.122567 systemd-journald[1228]: System Journal (/var/log/journal/f176b779d68b467592c6eddb01ae6289) is 8M, max 163.5M, 155.5M free.
Jan 14 13:39:30.193737 systemd-journald[1228]: Received client request to flush runtime journal.
Jan 14 13:39:30.193860 kernel: loop1: detected capacity change from 0 to 224512
Jan 14 13:39:30.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.116043 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:39:30.123997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:39:30.132019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:39:30.138848 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:39:30.147529 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:39:30.156595 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:39:30.165188 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 13:39:30.182437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:39:30.195865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:39:30.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.215881 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:39:30.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.221000 audit: BPF prog-id=18 op=LOAD
Jan 14 13:39:30.222000 audit: BPF prog-id=19 op=LOAD
Jan 14 13:39:30.222000 audit: BPF prog-id=20 op=LOAD
Jan 14 13:39:30.224212 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 13:39:30.228000 audit: BPF prog-id=21 op=LOAD
Jan 14 13:39:30.233051 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:39:30.238613 kernel: loop2: detected capacity change from 0 to 50784
Jan 14 13:39:30.238987 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:39:30.244654 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 13:39:30.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.252000 audit: BPF prog-id=22 op=LOAD
Jan 14 13:39:30.252000 audit: BPF prog-id=23 op=LOAD
Jan 14 13:39:30.252000 audit: BPF prog-id=24 op=LOAD
Jan 14 13:39:30.254586 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 13:39:30.261000 audit: BPF prog-id=25 op=LOAD
Jan 14 13:39:30.261000 audit: BPF prog-id=26 op=LOAD
Jan 14 13:39:30.261000 audit: BPF prog-id=27 op=LOAD
Jan 14 13:39:30.266663 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:39:30.306608 kernel: loop3: detected capacity change from 0 to 111560
Jan 14 13:39:30.307362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:39:30.319874 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Jan 14 13:39:30.319908 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Jan 14 13:39:30.329632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:39:30.329877 systemd-nsresourced[1283]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 13:39:30.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.335466 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 13:39:30.347500 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:39:30.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.370023 kernel: loop4: detected capacity change from 0 to 224512
Jan 14 13:39:30.387840 kernel: loop5: detected capacity change from 0 to 50784
Jan 14 13:39:30.406942 kernel: loop6: detected capacity change from 0 to 111560
Jan 14 13:39:30.422186 (sd-merge)[1302]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 14 13:39:30.441927 (sd-merge)[1302]: Merged extensions into '/usr'.
Jan 14 13:39:30.443708 systemd-oomd[1278]: No swap; memory pressure usage will be degraded
Jan 14 13:39:30.445004 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 13:39:30.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.452652 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:39:30.452700 systemd[1]: Reloading...
Jan 14 13:39:30.466674 systemd-resolved[1279]: Positive Trust Anchors:
Jan 14 13:39:30.466712 systemd-resolved[1279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:39:30.466720 systemd-resolved[1279]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 13:39:30.466767 systemd-resolved[1279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:39:30.483888 systemd-resolved[1279]: Defaulting to hostname 'linux'.
Jan 14 13:39:30.572895 zram_generator::config[1337]: No configuration found.
Jan 14 13:39:30.766979 systemd[1]: Reloading finished in 313 ms.
Jan 14 13:39:30.797464 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:39:30.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.802500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:39:30.807022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:39:30.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:39:30.825393 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:39:30.829336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:39:30.833000 audit: BPF prog-id=28 op=LOAD
Jan 14 13:39:30.833000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 13:39:30.833000 audit: BPF prog-id=29 op=LOAD
Jan 14 13:39:30.833000 audit: BPF prog-id=30 op=LOAD
Jan 14 13:39:30.833000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 13:39:30.833000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 13:39:30.835000 audit: BPF prog-id=31 op=LOAD
Jan 14 13:39:30.859000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 13:39:30.862000 audit: BPF prog-id=32 op=LOAD
Jan 14 13:39:30.862000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 13:39:30.862000 audit: BPF prog-id=33 op=LOAD
Jan 14 13:39:30.862000 audit: BPF prog-id=34 op=LOAD
Jan 14 13:39:30.862000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 13:39:30.862000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 13:39:30.864000 audit: BPF prog-id=35 op=LOAD
Jan 14 13:39:30.864000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 13:39:30.864000 audit: BPF prog-id=36 op=LOAD
Jan 14 13:39:30.864000 audit: BPF prog-id=37 op=LOAD
Jan 14 13:39:30.864000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 13:39:30.864000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 13:39:30.866000 audit: BPF prog-id=38 op=LOAD
Jan 14 13:39:30.866000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 13:39:30.866000 audit: BPF prog-id=39 op=LOAD
Jan 14 13:39:30.866000 audit: BPF prog-id=40 op=LOAD
Jan 14 13:39:30.866000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 13:39:30.866000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 13:39:30.874746 systemd[1]: Reload requested from client PID 1370 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:39:30.874882 systemd[1]: Reloading...
Jan 14 13:39:31.236862 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 13:39:31.236937 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 13:39:31.237589 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 13:39:31.247233 systemd-tmpfiles[1371]: ACLs are not supported, ignoring. Jan 14 13:39:31.247347 systemd-tmpfiles[1371]: ACLs are not supported, ignoring. Jan 14 13:39:31.265529 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:39:31.265853 systemd-tmpfiles[1371]: Skipping /boot Jan 14 13:39:31.285875 zram_generator::config[1409]: No configuration found. Jan 14 13:39:31.297365 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:39:31.297415 systemd-tmpfiles[1371]: Skipping /boot Jan 14 13:39:31.573118 systemd[1]: Reloading finished in 697 ms. Jan 14 13:39:31.593300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 13:39:31.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:31.600000 audit: BPF prog-id=41 op=LOAD Jan 14 13:39:31.600000 audit: BPF prog-id=28 op=UNLOAD Jan 14 13:39:31.600000 audit: BPF prog-id=42 op=LOAD Jan 14 13:39:31.600000 audit: BPF prog-id=43 op=LOAD Jan 14 13:39:31.600000 audit: BPF prog-id=29 op=UNLOAD Jan 14 13:39:31.600000 audit: BPF prog-id=30 op=UNLOAD Jan 14 13:39:31.601000 audit: BPF prog-id=44 op=LOAD Jan 14 13:39:31.601000 audit: BPF prog-id=32 op=UNLOAD Jan 14 13:39:31.601000 audit: BPF prog-id=45 op=LOAD Jan 14 13:39:31.601000 audit: BPF prog-id=46 op=LOAD Jan 14 13:39:31.601000 audit: BPF prog-id=33 op=UNLOAD Jan 14 13:39:31.601000 audit: BPF prog-id=34 op=UNLOAD Jan 14 13:39:31.602000 audit: BPF prog-id=47 op=LOAD Jan 14 13:39:31.602000 audit: BPF prog-id=31 op=UNLOAD Jan 14 13:39:31.603000 audit: BPF prog-id=48 op=LOAD Jan 14 13:39:31.603000 audit: BPF prog-id=35 op=UNLOAD Jan 14 13:39:31.603000 audit: BPF prog-id=49 op=LOAD Jan 14 13:39:31.603000 audit: BPF prog-id=50 op=LOAD Jan 14 13:39:31.603000 audit: BPF prog-id=36 op=UNLOAD Jan 14 13:39:31.603000 audit: BPF prog-id=37 op=UNLOAD Jan 14 13:39:31.604000 audit: BPF prog-id=51 op=LOAD Jan 14 13:39:31.604000 audit: BPF prog-id=38 op=UNLOAD Jan 14 13:39:31.604000 audit: BPF prog-id=52 op=LOAD Jan 14 13:39:31.604000 audit: BPF prog-id=53 op=LOAD Jan 14 13:39:31.604000 audit: BPF prog-id=39 op=UNLOAD Jan 14 13:39:31.604000 audit: BPF prog-id=40 op=UNLOAD Jan 14 13:39:31.629035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:39:31.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.646323 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:39:31.653276 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 14 13:39:31.676614 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 13:39:31.683312 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 13:39:31.686000 audit: BPF prog-id=8 op=UNLOAD Jan 14 13:39:31.686000 audit: BPF prog-id=7 op=UNLOAD Jan 14 13:39:31.687000 audit: BPF prog-id=54 op=LOAD Jan 14 13:39:31.687000 audit: BPF prog-id=55 op=LOAD Jan 14 13:39:31.695134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:39:31.704963 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 13:39:31.714233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:39:31.714418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:39:31.725244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:39:31.731978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:39:31.737000 audit[1454]: SYSTEM_BOOT pid=1454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.743125 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:39:31.747999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:39:31.748294 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 13:39:31.748424 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 13:39:31.748512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:39:31.750239 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 13:39:31.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.758763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:39:31.762021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:39:31.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.767388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:39:31.767709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:39:31.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:31.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.772604 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:39:31.773692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:39:31.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:31.785750 systemd-udevd[1453]: Using default interface naming scheme 'v257'. Jan 14 13:39:31.792767 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 13:39:31.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:31.802000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 13:39:31.802000 audit[1475]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd63865780 a2=420 a3=0 items=0 ppid=1442 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:31.802000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:39:31.803892 augenrules[1475]: No rules Jan 14 13:39:31.805999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:39:31.806418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:39:31.810120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:39:31.818135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:39:31.827353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:39:31.832199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:39:31.835768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:39:31.836045 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 13:39:31.836134 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 14 13:39:31.836247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:39:31.838241 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:39:31.838623 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:39:31.843728 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 13:39:31.850562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:39:31.855499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:39:31.859933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:39:31.865241 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:39:31.866523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:39:31.870508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:39:31.871093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:39:31.875153 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:39:31.875592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:39:31.887335 systemd[1]: Finished ensure-sysext.service. Jan 14 13:39:31.909005 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:39:31.912015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:39:31.912091 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:39:31.917049 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 14 13:39:31.920407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 13:39:31.921323 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 13:39:32.456869 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 13:39:32.473837 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 13:39:32.500114 kernel: ACPI: button: Power Button [PWRF] Jan 14 13:39:32.567298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 13:39:32.576037 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 13:39:32.604241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 13:39:32.604477 systemd-networkd[1510]: lo: Link UP Jan 14 13:39:32.604483 systemd-networkd[1510]: lo: Gained carrier Jan 14 13:39:32.608681 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:39:32.608689 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:39:32.610267 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:39:32.610528 systemd-networkd[1510]: eth0: Link UP Jan 14 13:39:32.612267 systemd-networkd[1510]: eth0: Gained carrier Jan 14 13:39:32.612292 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:39:32.673535 systemd[1]: Reached target network.target - Network. 
Jan 14 13:39:32.927031 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 13:39:32.935016 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 13:39:32.958943 systemd-networkd[1510]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 13:39:32.968352 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 13:39:32.972439 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 13:39:34.168809 systemd-resolved[1279]: Clock change detected. Flushing caches. Jan 14 13:39:34.168918 systemd-timesyncd[1511]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 13:39:34.169851 systemd-timesyncd[1511]: Initial clock synchronization to Wed 2026-01-14 13:39:34.168521 UTC. Jan 14 13:39:34.209393 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 13:39:34.268751 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 13:39:34.272961 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 13:39:34.877187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:39:34.937390 ldconfig[1444]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 13:39:34.963333 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 13:39:34.975113 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 14 13:39:35.011820 kernel: kvm_amd: TSC scaling supported Jan 14 13:39:35.011931 kernel: kvm_amd: Nested Virtualization enabled Jan 14 13:39:35.011962 kernel: kvm_amd: Nested Paging enabled Jan 14 13:39:35.015446 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 13:39:35.015493 kernel: kvm_amd: PMU virtualization is disabled Jan 14 13:39:35.073977 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 13:39:35.438379 kernel: EDAC MC: Ver: 3.0.0 Jan 14 13:39:35.593613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:39:35.601422 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:39:35.605509 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 13:39:35.610384 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 13:39:35.614445 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 13:39:35.617974 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:39:35.621211 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 13:39:35.625378 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 13:39:35.630551 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 13:39:35.645798 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 13:39:35.650567 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:39:35.650646 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:39:35.654763 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:39:35.660143 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jan 14 13:39:35.667898 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:39:35.678989 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 13:39:35.684613 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 13:39:35.689223 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 13:39:35.696939 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:39:35.700536 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 13:39:35.706317 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:39:35.712114 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:39:35.716441 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:39:35.720450 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:39:35.720533 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:39:35.727635 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:39:35.735234 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:39:35.757215 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:39:35.774886 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:39:35.782085 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 13:39:35.786064 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 13:39:35.815051 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jan 14 13:39:35.823812 systemd-networkd[1510]: eth0: Gained IPv6LL Jan 14 13:39:35.946647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 13:39:35.952629 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 13:39:35.955843 jq[1562]: false Jan 14 13:39:35.958545 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing passwd entry cache Jan 14 13:39:35.959082 oslogin_cache_refresh[1564]: Refreshing passwd entry cache Jan 14 13:39:35.959088 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:39:35.964014 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 13:39:35.976381 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 13:39:35.979516 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting users, quitting Jan 14 13:39:35.979516 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 13:39:35.979516 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing group entry cache Jan 14 13:39:35.979365 oslogin_cache_refresh[1564]: Failure getting users, quitting Jan 14 13:39:35.979417 oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 13:39:35.979460 oslogin_cache_refresh[1564]: Refreshing group entry cache Jan 14 13:39:35.980443 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 13:39:35.981043 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 13:39:35.981731 extend-filesystems[1563]: Found /dev/vda6 Jan 14 13:39:35.983884 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 14 13:39:35.986464 extend-filesystems[1563]: Found /dev/vda9 Jan 14 13:39:35.989626 extend-filesystems[1563]: Checking size of /dev/vda9 Jan 14 13:39:35.990229 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 13:39:35.997048 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:39:36.001066 jq[1581]: true Jan 14 13:39:36.002816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:39:36.003978 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting groups, quitting Jan 14 13:39:36.004076 oslogin_cache_refresh[1564]: Failure getting groups, quitting Jan 14 13:39:36.004171 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 13:39:36.004326 oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 13:39:36.009631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 13:39:36.010978 extend-filesystems[1563]: Resized partition /dev/vda9 Jan 14 13:39:36.013631 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 13:39:36.014128 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 13:39:36.014439 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 13:39:36.018472 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 13:39:36.021986 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 13:39:36.027092 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 13:39:36.027438 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 14 13:39:36.029902 extend-filesystems[1593]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 13:39:36.058253 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 13:39:36.065325 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:39:36.074004 tar[1599]: linux-amd64/LICENSE Jan 14 13:39:36.078763 tar[1599]: linux-amd64/helm Jan 14 13:39:36.078560 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 13:39:36.089595 jq[1600]: true Jan 14 13:39:36.092003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:39:36.103229 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 13:39:36.297017 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 13:39:36.297474 update_engine[1578]: I20260114 13:39:36.281565 1578 main.cc:92] Flatcar Update Engine starting Jan 14 13:39:36.343773 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 13:39:36.352802 dbus-daemon[1560]: [system] SELinux support is enabled Jan 14 13:39:36.353489 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 13:39:36.361858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 13:39:36.361888 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 13:39:36.367658 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 13:39:36.367799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 14 13:39:36.382429 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 13:39:36.382429 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 13:39:36.382429 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 13:39:36.397846 extend-filesystems[1563]: Resized filesystem in /dev/vda9 Jan 14 13:39:36.398032 update_engine[1578]: I20260114 13:39:36.386548 1578 update_check_scheduler.cc:74] Next update check in 5m6s Jan 14 13:39:36.386310 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:39:36.394661 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 13:39:36.396149 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 13:39:36.401030 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 13:39:36.401064 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 13:39:36.407144 systemd-logind[1577]: New seat seat0. Jan 14 13:39:36.410432 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:39:36.418091 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:39:36.433848 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 14 13:39:36.434211 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 14 13:39:36.580910 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 13:39:36.585217 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 13:39:36.604319 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 13:39:36.632203 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 14 13:39:36.645341 bash[1654]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:39:36.650810 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 13:39:36.661832 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 13:39:36.673196 locksmithd[1652]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 13:39:36.683197 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 13:39:36.683831 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 13:39:36.690984 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 13:39:37.052064 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:39:37.060306 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:39:37.067046 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 13:39:37.070509 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:39:37.413844 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:39:37.420049 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:42938.service - OpenSSH per-connection server daemon (10.0.0.1:42938). 
Jan 14 13:39:37.874502 containerd[1601]: time="2026-01-14T13:39:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 13:39:37.876559 containerd[1601]: time="2026-01-14T13:39:37.876331932Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 13:39:37.906252 containerd[1601]: time="2026-01-14T13:39:37.906124774Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="49.563µs" Jan 14 13:39:37.906252 containerd[1601]: time="2026-01-14T13:39:37.906197310Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 13:39:37.906378 containerd[1601]: time="2026-01-14T13:39:37.906260057Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 13:39:37.906378 containerd[1601]: time="2026-01-14T13:39:37.906274754Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 13:39:37.906599 containerd[1601]: time="2026-01-14T13:39:37.906542274Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 13:39:37.906632 containerd[1601]: time="2026-01-14T13:39:37.906616011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 13:39:37.906870 containerd[1601]: time="2026-01-14T13:39:37.906846131Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 13:39:37.906870 containerd[1601]: time="2026-01-14T13:39:37.906865096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 
13:39:37.907235 containerd[1601]: time="2026-01-14T13:39:37.907214098Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 13:39:37.907235 containerd[1601]: time="2026-01-14T13:39:37.907234696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 13:39:37.907282 containerd[1601]: time="2026-01-14T13:39:37.907245908Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 13:39:37.907282 containerd[1601]: time="2026-01-14T13:39:37.907253432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 13:39:37.907568 containerd[1601]: time="2026-01-14T13:39:37.907512615Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 13:39:37.907568 containerd[1601]: time="2026-01-14T13:39:37.907531220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 13:39:37.908064 containerd[1601]: time="2026-01-14T13:39:37.907972611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 13:39:37.909061 containerd[1601]: time="2026-01-14T13:39:37.908986554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 13:39:37.909286 containerd[1601]: time="2026-01-14T13:39:37.909116717Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 13:39:37.909286 containerd[1601]: time="2026-01-14T13:39:37.909162503Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 13:39:37.909337 containerd[1601]: time="2026-01-14T13:39:37.909306682Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 13:39:37.910288 containerd[1601]: time="2026-01-14T13:39:37.910228353Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 13:39:37.910493 containerd[1601]: time="2026-01-14T13:39:37.910428116Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:39:37.926767 containerd[1601]: time="2026-01-14T13:39:37.926340550Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 13:39:37.926767 containerd[1601]: time="2026-01-14T13:39:37.926485040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 13:39:37.926767 containerd[1601]: time="2026-01-14T13:39:37.926606296Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 13:39:37.926767 containerd[1601]: time="2026-01-14T13:39:37.926621675Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 13:39:37.926767 containerd[1601]: time="2026-01-14T13:39:37.926659336Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 13:39:37.926767 containerd[1601]: time="2026-01-14T13:39:37.926776043Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926792184Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926802022Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926813143Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926826017Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926837298Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926847637Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926856674Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 13:39:37.926932 containerd[1601]: time="2026-01-14T13:39:37.926868366Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 13:39:37.927584 containerd[1601]: time="2026-01-14T13:39:37.927504203Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 13:39:37.927634 containerd[1601]: time="2026-01-14T13:39:37.927590053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 13:39:37.927634 containerd[1601]: time="2026-01-14T13:39:37.927623025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 13:39:37.928157 containerd[1601]: time="2026-01-14T13:39:37.927817838Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 13:39:37.928157 containerd[1601]: time="2026-01-14T13:39:37.928035204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 13:39:37.928157 containerd[1601]: time="2026-01-14T13:39:37.928106398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 13:39:37.928222 containerd[1601]: time="2026-01-14T13:39:37.928178502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 13:39:37.928222 containerd[1601]: time="2026-01-14T13:39:37.928204831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 13:39:37.928289 containerd[1601]: time="2026-01-14T13:39:37.928254925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 13:39:37.928289 containerd[1601]: time="2026-01-14T13:39:37.928277737Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 13:39:37.928413 containerd[1601]: time="2026-01-14T13:39:37.928296222Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 13:39:37.928413 containerd[1601]: time="2026-01-14T13:39:37.928332901Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 13:39:37.928796 containerd[1601]: time="2026-01-14T13:39:37.928536191Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 13:39:37.928796 containerd[1601]: time="2026-01-14T13:39:37.928578699Z" level=info msg="Start snapshots syncer" Jan 14 13:39:37.929528 containerd[1601]: time="2026-01-14T13:39:37.928869132Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 
13:39:37.931912 containerd[1601]: time="2026-01-14T13:39:37.930646812Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 13:39:37.931912 containerd[1601]: time="2026-01-14T13:39:37.931046017Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.931435745Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933063233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933098550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933114019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933128175Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933145627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933161737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933177257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933190902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 13:39:37.933431 containerd[1601]: time="2026-01-14T13:39:37.933205750Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934379171Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934412654Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934431288Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934550521Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934566100Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934585056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.934604332Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.935134972Z" level=info msg="runtime interface created" Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.935151984Z" level=info msg="created NRI interface" Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.935274763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.935299951Z" level=info msg="Connect containerd service" Jan 14 13:39:37.936381 containerd[1601]: time="2026-01-14T13:39:37.935488522Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:39:37.938166 sshd[1678]: 
Accepted publickey for core from 10.0.0.1 port 42938 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:37.944024 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:37.992405 containerd[1601]: time="2026-01-14T13:39:37.992024004Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:39:38.015546 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:39:38.021580 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:39:38.027511 systemd-logind[1577]: New session 1 of user core. Jan 14 13:39:38.073310 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:39:38.083062 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:39:38.117288 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:38.123967 systemd-logind[1577]: New session 2 of user core. Jan 14 13:39:38.127828 tar[1599]: linux-amd64/README.md Jan 14 13:39:38.160900 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 14 13:39:38.223134 containerd[1601]: time="2026-01-14T13:39:38.222601789Z" level=info msg="Start subscribing containerd event" Jan 14 13:39:38.223134 containerd[1601]: time="2026-01-14T13:39:38.222809657Z" level=info msg="Start recovering state" Jan 14 13:39:38.223246 containerd[1601]: time="2026-01-14T13:39:38.223192301Z" level=info msg="Start event monitor" Jan 14 13:39:38.223246 containerd[1601]: time="2026-01-14T13:39:38.223210926Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:39:38.223246 containerd[1601]: time="2026-01-14T13:39:38.223259196Z" level=info msg="Start streaming server" Jan 14 13:39:38.223384 containerd[1601]: time="2026-01-14T13:39:38.223301896Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 13:39:38.223384 containerd[1601]: time="2026-01-14T13:39:38.223312375Z" level=info msg="runtime interface starting up..." Jan 14 13:39:38.223384 containerd[1601]: time="2026-01-14T13:39:38.223334306Z" level=info msg="starting plugins..." Jan 14 13:39:38.223384 containerd[1601]: time="2026-01-14T13:39:38.223382817Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 13:39:38.229277 containerd[1601]: time="2026-01-14T13:39:38.229198006Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:39:38.229759 containerd[1601]: time="2026-01-14T13:39:38.229649769Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:39:38.232617 containerd[1601]: time="2026-01-14T13:39:38.232191445Z" level=info msg="containerd successfully booted in 0.358756s" Jan 14 13:39:38.233144 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:39:38.306579 systemd[1693]: Queued start job for default target default.target. Jan 14 13:39:38.331221 systemd[1693]: Created slice app.slice - User Application Slice. Jan 14 13:39:38.331278 systemd[1693]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. 
Jan 14 13:39:38.331293 systemd[1693]: Reached target paths.target - Paths. Jan 14 13:39:38.331356 systemd[1693]: Reached target timers.target - Timers. Jan 14 13:39:38.344594 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:39:38.351877 systemd[1693]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 13:39:38.414034 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:39:38.416269 systemd[1693]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 13:39:38.416418 systemd[1693]: Reached target sockets.target - Sockets. Jan 14 13:39:38.416471 systemd[1693]: Reached target basic.target - Basic System. Jan 14 13:39:38.416575 systemd[1693]: Reached target default.target - Main User Target. Jan 14 13:39:38.416619 systemd[1693]: Startup finished in 250ms. Jan 14 13:39:38.417195 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:39:38.436173 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:39:38.604988 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:42940.service - OpenSSH per-connection server daemon (10.0.0.1:42940). Jan 14 13:39:38.696619 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 42940 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:38.723017 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:38.761812 systemd-logind[1577]: New session 3 of user core. Jan 14 13:39:38.774021 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:39:38.801205 sshd[1722]: Connection closed by 10.0.0.1 port 42940 Jan 14 13:39:38.802155 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:39.685165 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:42940.service: Deactivated successfully. Jan 14 13:39:39.689517 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 14 13:39:39.693159 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jan 14 13:39:39.698388 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:42942.service - OpenSSH per-connection server daemon (10.0.0.1:42942). Jan 14 13:39:39.707586 systemd-logind[1577]: Removed session 3. Jan 14 13:39:39.796422 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 42942 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:39.799128 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:39.810111 systemd-logind[1577]: New session 4 of user core. Jan 14 13:39:39.819062 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:39:39.866004 sshd[1732]: Connection closed by 10.0.0.1 port 42942 Jan 14 13:39:39.866618 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:39.874053 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:42942.service: Deactivated successfully. Jan 14 13:39:39.877988 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:39:39.881647 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:39:39.883213 systemd-logind[1577]: Removed session 4. Jan 14 13:39:40.553251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:39:40.559228 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:39:40.564259 systemd[1]: Startup finished in 6.591s (kernel) + 14.167s (initrd) + 11.017s (userspace) = 31.777s. 
Jan 14 13:39:40.583337 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:39:42.218949 kubelet[1742]: E0114 13:39:42.218478 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:39:42.233100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:39:42.233489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:39:42.235200 systemd[1]: kubelet.service: Consumed 4.714s CPU time, 266.7M memory peak. Jan 14 13:39:49.898077 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:32816.service - OpenSSH per-connection server daemon (10.0.0.1:32816). Jan 14 13:39:50.522846 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 32816 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:50.528514 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:50.558365 systemd-logind[1577]: New session 5 of user core. Jan 14 13:39:50.573123 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:39:50.596613 sshd[1759]: Connection closed by 10.0.0.1 port 32816 Jan 14 13:39:50.597287 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:50.608331 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:32816.service: Deactivated successfully. Jan 14 13:39:50.611181 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:39:50.612519 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. 
Jan 14 13:39:50.616309 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:32828.service - OpenSSH per-connection server daemon (10.0.0.1:32828). Jan 14 13:39:50.617364 systemd-logind[1577]: Removed session 5. Jan 14 13:39:50.708267 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 32828 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:50.710856 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:50.718333 systemd-logind[1577]: New session 6 of user core. Jan 14 13:39:50.732030 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:39:50.761758 sshd[1769]: Connection closed by 10.0.0.1 port 32828 Jan 14 13:39:50.762386 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:50.776138 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:32828.service: Deactivated successfully. Jan 14 13:39:50.778602 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:39:50.780023 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:39:50.783334 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:32836.service - OpenSSH per-connection server daemon (10.0.0.1:32836). Jan 14 13:39:50.784543 systemd-logind[1577]: Removed session 6. Jan 14 13:39:50.853043 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 32836 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:50.855316 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:50.863173 systemd-logind[1577]: New session 7 of user core. Jan 14 13:39:50.873981 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 13:39:50.894824 sshd[1779]: Connection closed by 10.0.0.1 port 32836 Jan 14 13:39:50.895992 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:50.905956 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:32836.service: Deactivated successfully. Jan 14 13:39:50.908124 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:39:50.909158 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:39:50.912896 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:32846.service - OpenSSH per-connection server daemon (10.0.0.1:32846). Jan 14 13:39:50.913515 systemd-logind[1577]: Removed session 7. Jan 14 13:39:50.994244 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:50.996160 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:51.002984 systemd-logind[1577]: New session 8 of user core. Jan 14 13:39:51.017013 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 13:39:51.052134 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:39:51.052908 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:39:51.067173 sudo[1790]: pam_unix(sudo:session): session closed for user root Jan 14 13:39:51.069229 sshd[1789]: Connection closed by 10.0.0.1 port 32846 Jan 14 13:39:51.069910 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:51.082909 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:32846.service: Deactivated successfully. Jan 14 13:39:51.085077 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:39:51.086319 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:39:51.090517 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:32862.service - OpenSSH per-connection server daemon (10.0.0.1:32862). 
Jan 14 13:39:51.091566 systemd-logind[1577]: Removed session 8. Jan 14 13:39:51.391830 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 32862 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:51.394519 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:51.404160 systemd-logind[1577]: New session 9 of user core. Jan 14 13:39:51.413977 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:39:51.439326 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:39:51.439960 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:39:51.445401 sudo[1803]: pam_unix(sudo:session): session closed for user root Jan 14 13:39:51.457597 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:39:51.458277 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:39:51.470474 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:39:51.546000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 13:39:51.547866 augenrules[1827]: No rules Jan 14 13:39:51.549904 kernel: kauditd_printk_skb: 171 callbacks suppressed Jan 14 13:39:51.550029 kernel: audit: type=1305 audit(1768397991.546:216): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 13:39:51.550130 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:39:51.550765 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 13:39:51.552559 sudo[1802]: pam_unix(sudo:session): session closed for user root Jan 14 13:39:51.554985 sshd[1801]: Connection closed by 10.0.0.1 port 32862 Jan 14 13:39:51.546000 audit[1827]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3a45a810 a2=420 a3=0 items=0 ppid=1808 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:51.557337 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:51.568767 kernel: audit: type=1300 audit(1768397991.546:216): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3a45a810 a2=420 a3=0 items=0 ppid=1808 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:51.568832 kernel: audit: type=1327 audit(1768397991.546:216): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:39:51.546000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:39:51.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.584270 kernel: audit: type=1130 audit(1768397991.550:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.584315 kernel: audit: type=1131 audit(1768397991.550:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 13:39:51.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.551000 audit[1802]: USER_END pid=1802 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.605235 kernel: audit: type=1106 audit(1768397991.551:219): pid=1802 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.605311 kernel: audit: type=1104 audit(1768397991.552:220): pid=1802 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.552000 audit[1802]: CRED_DISP pid=1802 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:51.558000 audit[1797]: USER_END pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.629248 kernel: audit: type=1106 audit(1768397991.558:221): pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.629349 kernel: audit: type=1104 audit(1768397991.558:222): pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.558000 audit[1797]: CRED_DISP pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.663294 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:32862.service: Deactivated successfully. Jan 14 13:39:51.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.106:22-10.0.0.1:32862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.674773 kernel: audit: type=1131 audit(1768397991.663:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.106:22-10.0.0.1:32862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:51.676490 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:39:51.678410 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:39:51.683421 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:32878.service - OpenSSH per-connection server daemon (10.0.0.1:32878). Jan 14 13:39:51.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.106:22-10.0.0.1:32878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.684865 systemd-logind[1577]: Removed session 9. Jan 14 13:39:51.750000 audit[1836]: USER_ACCT pid=1836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.751919 sshd[1836]: Accepted publickey for core from 10.0.0.1 port 32878 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:39:51.752000 audit[1836]: CRED_ACQ pid=1836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.752000 audit[1836]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f458650 a2=3 a3=0 items=0 ppid=1 pid=1836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:51.752000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:51.754168 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:51.761521 systemd-logind[1577]: New session 10 
of user core. Jan 14 13:39:51.771095 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 13:39:51.774000 audit[1836]: USER_START pid=1836 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.777000 audit[1840]: CRED_ACQ pid=1840 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:39:51.791000 audit[1841]: USER_ACCT pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.792330 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:39:51.791000 audit[1841]: CRED_REFR pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:39:51.792883 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:39:51.792000 audit[1841]: USER_START pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:39:52.374202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:39:52.377023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:39:53.009102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:39:53.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:53.024228 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:39:53.483096 kubelet[1869]: E0114 13:39:53.482993 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:39:53.489957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:39:53.490271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:39:53.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:39:53.491204 systemd[1]: kubelet.service: Consumed 984ms CPU time, 110.7M memory peak. Jan 14 13:39:54.257040 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 13:39:54.294487 (dockerd)[1879]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:39:55.954507 dockerd[1879]: time="2026-01-14T13:39:55.954221206Z" level=info msg="Starting up" Jan 14 13:39:55.956587 dockerd[1879]: time="2026-01-14T13:39:55.955920910Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 13:39:56.274263 dockerd[1879]: time="2026-01-14T13:39:56.273152583Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 13:39:56.448829 dockerd[1879]: time="2026-01-14T13:39:56.448661116Z" level=info msg="Loading containers: start." Jan 14 13:39:56.464779 kernel: Initializing XFRM netlink socket Jan 14 13:39:56.578000 audit[1933]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.582637 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 13:39:56.582809 kernel: audit: type=1325 audit(1768397996.578:235): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.578000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdc7de6a30 a2=0 a3=0 items=0 ppid=1879 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.602846 kernel: audit: type=1300 audit(1768397996.578:235): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdc7de6a30 a2=0 a3=0 items=0 ppid=1879 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:39:56.602954 kernel: audit: type=1327 audit(1768397996.578:235): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:39:56.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:39:56.583000 audit[1935]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.614651 kernel: audit: type=1325 audit(1768397996.583:236): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.614871 kernel: audit: type=1300 audit(1768397996.583:236): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedb138c60 a2=0 a3=0 items=0 ppid=1879 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.583000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedb138c60 a2=0 a3=0 items=0 ppid=1879 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.903316 kernel: audit: type=1327 audit(1768397996.583:236): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:39:56.924891 kernel: audit: type=1325 audit(1768397996.588:237): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.950400 kernel: audit: type=1300 audit(1768397996.588:237): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfebb25f0 a2=0 a3=0 items=0 ppid=1879 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:39:56.588000 audit[1937]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.588000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfebb25f0 a2=0 a3=0 items=0 ppid=1879 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:39:56.592000 audit[1939]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.986552 kernel: audit: type=1327 audit(1768397996.588:237): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:39:56.986823 kernel: audit: type=1325 audit(1768397996.592:238): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.592000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa8ad2800 a2=0 a3=0 items=0 ppid=1879 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 13:39:56.597000 audit[1941]: 
NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.597000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc4b93f900 a2=0 a3=0 items=0 ppid=1879 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.597000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 13:39:56.602000 audit[1943]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.602000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd49731580 a2=0 a3=0 items=0 ppid=1879 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:39:56.607000 audit[1945]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.607000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcc65342b0 a2=0 a3=0 items=0 ppid=1879 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.607000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 
13:39:56.613000 audit[1947]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:56.613000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe03219120 a2=0 a3=0 items=0 ppid=1879 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:56.613000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 13:39:57.028000 audit[1950]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.028000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc87401e30 a2=0 a3=0 items=0 ppid=1879 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 13:39:57.052000 audit[1952]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.052000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff79d489c0 a2=0 a3=0 items=0 ppid=1879 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:39:57.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 13:39:57.058000 audit[1954]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.058000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc7766a780 a2=0 a3=0 items=0 ppid=1879 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 13:39:57.064000 audit[1956]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.064000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd77164440 a2=0 a3=0 items=0 ppid=1879 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:39:57.071000 audit[1958]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.071000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc1d719f50 a2=0 a3=0 items=0 ppid=1879 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 13:39:57.181000 audit[1988]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.181000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe9e5474e0 a2=0 a3=0 items=0 ppid=1879 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:39:57.186000 audit[1990]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.186000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe77275e70 a2=0 a3=0 items=0 ppid=1879 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:39:57.191000 audit[1992]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.191000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3f4b3ba0 a2=0 a3=0 items=0 ppid=1879 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:39:57.196000 audit[1994]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.196000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfda16da0 a2=0 a3=0 items=0 ppid=1879 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 13:39:57.200000 audit[1996]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.200000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf66e05a0 a2=0 a3=0 items=0 ppid=1879 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 13:39:57.206000 audit[1998]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.206000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcfe02fad0 a2=0 a3=0 items=0 ppid=1879 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.206000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:39:57.210000 audit[2000]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.210000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe2c26ea0 a2=0 a3=0 items=0 ppid=1879 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.210000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:39:57.215000 audit[2002]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.215000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe65e447c0 a2=0 a3=0 items=0 ppid=1879 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 13:39:57.221000 audit[2004]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.221000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcf10d19d0 a2=0 a3=0 items=0 ppid=1879 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.221000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 13:39:57.226000 audit[2006]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.226000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd17841470 a2=0 a3=0 items=0 ppid=1879 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 13:39:57.231000 audit[2008]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.231000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe1a6def20 a2=0 a3=0 items=0 ppid=1879 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 13:39:57.239000 audit[2010]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.239000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=248 a0=3 a1=7ffc9baaaf10 a2=0 a3=0 items=0 ppid=1879 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:39:57.262000 audit[2012]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.262000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe513c2cf0 a2=0 a3=0 items=0 ppid=1879 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 13:39:57.276000 audit[2017]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.276000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5d379610 a2=0 a3=0 items=0 ppid=1879 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 13:39:57.280000 audit[2019]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.280000 audit[2019]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc184a9910 a2=0 a3=0 items=0 ppid=1879 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 13:39:57.284000 audit[2021]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.284000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe8fb53520 a2=0 a3=0 items=0 ppid=1879 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.284000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 13:39:57.289000 audit[2023]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.289000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe890ff4c0 a2=0 a3=0 items=0 ppid=1879 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 13:39:57.293000 audit[2025]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.293000 audit[2025]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=212 a0=3 a1=7ffda1f513c0 a2=0 a3=0 items=0 ppid=1879 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 13:39:57.298000 audit[2027]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:39:57.298000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeb968ef40 a2=0 a3=0 items=0 ppid=1879 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 13:39:57.325000 audit[2031]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.325000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc059d4f80 a2=0 a3=0 items=0 ppid=1879 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.325000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 13:39:57.335000 audit[2033]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 14 13:39:57.335000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc35acb8d0 a2=0 a3=0 items=0 ppid=1879 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.335000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 13:39:57.363000 audit[2041]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.363000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc48fca730 a2=0 a3=0 items=0 ppid=1879 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 13:39:57.381000 audit[2047]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.381000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffeda8c810 a2=0 a3=0 items=0 ppid=1879 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.381000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 13:39:57.386000 audit[2049]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.386000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffe34c5ae0 a2=0 a3=0 items=0 ppid=1879 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 13:39:57.391000 audit[2051]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.391000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd99bda230 a2=0 a3=0 items=0 ppid=1879 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 13:39:57.397000 audit[2053]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.397000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff099f9d00 a2=0 a3=0 items=0 ppid=1879 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.397000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:39:57.403000 audit[2055]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:39:57.403000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef7054340 a2=0 a3=0 items=0 ppid=1879 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:57.403000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 13:39:57.404620 systemd-networkd[1510]: docker0: Link UP Jan 14 13:39:57.412460 dockerd[1879]: time="2026-01-14T13:39:57.412344986Z" level=info msg="Loading containers: done." Jan 14 13:39:57.450911 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2684811249-merged.mount: Deactivated successfully. 
Jan 14 13:39:57.458812 dockerd[1879]: time="2026-01-14T13:39:57.458582457Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:39:57.459026 dockerd[1879]: time="2026-01-14T13:39:57.458954202Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 13:39:57.459318 dockerd[1879]: time="2026-01-14T13:39:57.459272867Z" level=info msg="Initializing buildkit" Jan 14 13:39:57.508832 dockerd[1879]: time="2026-01-14T13:39:57.508775956Z" level=info msg="Completed buildkit initialization" Jan 14 13:39:57.516934 dockerd[1879]: time="2026-01-14T13:39:57.516838827Z" level=info msg="Daemon has completed initialization" Jan 14 13:39:57.517345 dockerd[1879]: time="2026-01-14T13:39:57.517103113Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:39:57.518103 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:39:57.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:59.218260 containerd[1601]: time="2026-01-14T13:39:59.217357044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 13:39:59.930627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042077157.mount: Deactivated successfully. 
Jan 14 13:40:02.608281 containerd[1601]: time="2026-01-14T13:40:02.607858182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:02.610904 containerd[1601]: time="2026-01-14T13:40:02.608835240Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27533830" Jan 14 13:40:02.610904 containerd[1601]: time="2026-01-14T13:40:02.610650234Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:02.616244 containerd[1601]: time="2026-01-14T13:40:02.616127152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:02.617860 containerd[1601]: time="2026-01-14T13:40:02.617774848Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.399873978s" Jan 14 13:40:02.617943 containerd[1601]: time="2026-01-14T13:40:02.617928796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 13:40:02.621492 containerd[1601]: time="2026-01-14T13:40:02.621415434Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 13:40:03.770895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 14 13:40:03.774544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:40:04.372041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:40:04.381790 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 14 13:40:04.381945 kernel: audit: type=1130 audit(1768398004.371:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:04.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:04.392135 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:40:04.697121 kubelet[2165]: E0114 13:40:04.695585 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:40:04.701335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:40:04.701651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:40:04.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:40:04.711039 systemd[1]: kubelet.service: Consumed 814ms CPU time, 110M memory peak. 
Jan 14 13:40:04.711767 kernel: audit: type=1131 audit(1768398004.701:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:40:06.084861 containerd[1601]: time="2026-01-14T13:40:06.084518972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:06.086549 containerd[1601]: time="2026-01-14T13:40:06.086258079Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 13:40:06.088076 containerd[1601]: time="2026-01-14T13:40:06.088016440Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:06.092387 containerd[1601]: time="2026-01-14T13:40:06.092295779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:06.094581 containerd[1601]: time="2026-01-14T13:40:06.094493056Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.473018311s" Jan 14 13:40:06.094725 containerd[1601]: time="2026-01-14T13:40:06.094614153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 
13:40:06.099434 containerd[1601]: time="2026-01-14T13:40:06.099381250Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 13:40:08.659413 containerd[1601]: time="2026-01-14T13:40:08.659254343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:08.661489 containerd[1601]: time="2026-01-14T13:40:08.661340922Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19400173" Jan 14 13:40:08.663072 containerd[1601]: time="2026-01-14T13:40:08.662952891Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:08.668881 containerd[1601]: time="2026-01-14T13:40:08.668764568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:08.671218 containerd[1601]: time="2026-01-14T13:40:08.671136344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.571669723s" Jan 14 13:40:08.671340 containerd[1601]: time="2026-01-14T13:40:08.671228054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 13:40:08.673334 containerd[1601]: time="2026-01-14T13:40:08.673212640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 13:40:10.676584 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3979423318.mount: Deactivated successfully. Jan 14 13:40:11.082371 containerd[1601]: time="2026-01-14T13:40:11.082104360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:11.083465 containerd[1601]: time="2026-01-14T13:40:11.083405662Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 14 13:40:11.085014 containerd[1601]: time="2026-01-14T13:40:11.084878046Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:11.087552 containerd[1601]: time="2026-01-14T13:40:11.087442434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:11.088463 containerd[1601]: time="2026-01-14T13:40:11.088386946Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.415099527s" Jan 14 13:40:11.088542 containerd[1601]: time="2026-01-14T13:40:11.088501941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 13:40:11.090610 containerd[1601]: time="2026-01-14T13:40:11.090447419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 13:40:11.641289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577230655.mount: Deactivated successfully. 
Jan 14 13:40:12.572509 containerd[1601]: time="2026-01-14T13:40:12.572404669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:12.573603 containerd[1601]: time="2026-01-14T13:40:12.573544617Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=20316" Jan 14 13:40:12.575273 containerd[1601]: time="2026-01-14T13:40:12.575203632Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:12.578872 containerd[1601]: time="2026-01-14T13:40:12.578807734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:12.580311 containerd[1601]: time="2026-01-14T13:40:12.580245195Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.48976177s" Jan 14 13:40:12.580311 containerd[1601]: time="2026-01-14T13:40:12.580308714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 13:40:12.581881 containerd[1601]: time="2026-01-14T13:40:12.581801867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 13:40:13.007600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836291418.mount: Deactivated successfully. 
Jan 14 13:40:13.016907 containerd[1601]: time="2026-01-14T13:40:13.016790061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:40:13.018253 containerd[1601]: time="2026-01-14T13:40:13.018120508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 13:40:13.019427 containerd[1601]: time="2026-01-14T13:40:13.019367369Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:40:13.022637 containerd[1601]: time="2026-01-14T13:40:13.022535277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:40:13.023540 containerd[1601]: time="2026-01-14T13:40:13.023485272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 441.640646ms" Jan 14 13:40:13.023540 containerd[1601]: time="2026-01-14T13:40:13.023523493Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 13:40:13.024231 containerd[1601]: time="2026-01-14T13:40:13.024192879Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 13:40:13.478389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931526593.mount: Deactivated 
successfully. Jan 14 13:40:14.876490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:40:14.881091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:40:15.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:15.274024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:40:15.280787 kernel: audit: type=1130 audit(1768398015.273:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:15.580893 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:40:15.686599 kubelet[2306]: E0114 13:40:15.686503 2306 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:40:15.692534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:40:15.692932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:40:15.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:40:15.694025 systemd[1]: kubelet.service: Consumed 398ms CPU time, 108.9M memory peak. 
Jan 14 13:40:15.702759 kernel: audit: type=1131 audit(1768398015.693:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:40:18.061923 containerd[1601]: time="2026-01-14T13:40:18.061389088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:18.064308 containerd[1601]: time="2026-01-14T13:40:18.063331632Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 14 13:40:18.064442 containerd[1601]: time="2026-01-14T13:40:18.064369705Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:18.067046 containerd[1601]: time="2026-01-14T13:40:18.066979910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:18.068630 containerd[1601]: time="2026-01-14T13:40:18.068550494Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.044313282s" Jan 14 13:40:18.068794 containerd[1601]: time="2026-01-14T13:40:18.068745448Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 13:40:20.586604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:40:20.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:20.586941 systemd[1]: kubelet.service: Consumed 398ms CPU time, 108.9M memory peak. Jan 14 13:40:20.589801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:40:20.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:20.600585 kernel: audit: type=1130 audit(1768398020.586:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:20.600755 kernel: audit: type=1131 audit(1768398020.586:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:20.624291 systemd[1]: Reload requested from client PID 2347 ('systemctl') (unit session-10.scope)... Jan 14 13:40:20.624355 systemd[1]: Reloading... Jan 14 13:40:21.014743 zram_generator::config[2392]: No configuration found. Jan 14 13:40:21.264121 systemd[1]: Reloading finished in 639 ms. 
Jan 14 13:40:21.296000 audit: BPF prog-id=61 op=LOAD Jan 14 13:40:21.302314 kernel: audit: type=1334 audit(1768398021.296:282): prog-id=61 op=LOAD Jan 14 13:40:21.302387 kernel: audit: type=1334 audit(1768398021.297:283): prog-id=58 op=UNLOAD Jan 14 13:40:21.297000 audit: BPF prog-id=58 op=UNLOAD Jan 14 13:40:21.297000 audit: BPF prog-id=62 op=LOAD Jan 14 13:40:21.304819 kernel: audit: type=1334 audit(1768398021.297:284): prog-id=62 op=LOAD Jan 14 13:40:21.304900 kernel: audit: type=1334 audit(1768398021.297:285): prog-id=63 op=LOAD Jan 14 13:40:21.297000 audit: BPF prog-id=63 op=LOAD Jan 14 13:40:21.306792 kernel: audit: type=1334 audit(1768398021.297:286): prog-id=59 op=UNLOAD Jan 14 13:40:21.297000 audit: BPF prog-id=59 op=UNLOAD Jan 14 13:40:21.308891 kernel: audit: type=1334 audit(1768398021.297:287): prog-id=60 op=UNLOAD Jan 14 13:40:21.297000 audit: BPF prog-id=60 op=UNLOAD Jan 14 13:40:21.310970 kernel: audit: type=1334 audit(1768398021.298:288): prog-id=64 op=LOAD Jan 14 13:40:21.298000 audit: BPF prog-id=64 op=LOAD Jan 14 13:40:21.312959 kernel: audit: type=1334 audit(1768398021.298:289): prog-id=47 op=UNLOAD Jan 14 13:40:21.298000 audit: BPF prog-id=47 op=UNLOAD Jan 14 13:40:21.299000 audit: BPF prog-id=65 op=LOAD Jan 14 13:40:21.299000 audit: BPF prog-id=48 op=UNLOAD Jan 14 13:40:21.299000 audit: BPF prog-id=66 op=LOAD Jan 14 13:40:21.299000 audit: BPF prog-id=67 op=LOAD Jan 14 13:40:21.299000 audit: BPF prog-id=49 op=UNLOAD Jan 14 13:40:21.299000 audit: BPF prog-id=50 op=UNLOAD Jan 14 13:40:21.299000 audit: BPF prog-id=68 op=LOAD Jan 14 13:40:21.299000 audit: BPF prog-id=51 op=UNLOAD Jan 14 13:40:21.299000 audit: BPF prog-id=69 op=LOAD Jan 14 13:40:21.299000 audit: BPF prog-id=70 op=LOAD Jan 14 13:40:21.299000 audit: BPF prog-id=52 op=UNLOAD Jan 14 13:40:21.299000 audit: BPF prog-id=53 op=UNLOAD Jan 14 13:40:21.302000 audit: BPF prog-id=71 op=LOAD Jan 14 13:40:21.302000 audit: BPF prog-id=57 op=UNLOAD Jan 14 13:40:21.325000 audit: BPF prog-id=72 
op=LOAD Jan 14 13:40:21.325000 audit: BPF prog-id=56 op=UNLOAD Jan 14 13:40:21.326000 audit: BPF prog-id=73 op=LOAD Jan 14 13:40:21.326000 audit: BPF prog-id=44 op=UNLOAD Jan 14 13:40:21.326000 audit: BPF prog-id=74 op=LOAD Jan 14 13:40:21.326000 audit: BPF prog-id=75 op=LOAD Jan 14 13:40:21.326000 audit: BPF prog-id=45 op=UNLOAD Jan 14 13:40:21.326000 audit: BPF prog-id=46 op=UNLOAD Jan 14 13:40:21.328000 audit: BPF prog-id=76 op=LOAD Jan 14 13:40:21.328000 audit: BPF prog-id=41 op=UNLOAD Jan 14 13:40:21.328000 audit: BPF prog-id=77 op=LOAD Jan 14 13:40:21.328000 audit: BPF prog-id=78 op=LOAD Jan 14 13:40:21.328000 audit: BPF prog-id=42 op=UNLOAD Jan 14 13:40:21.328000 audit: BPF prog-id=43 op=UNLOAD Jan 14 13:40:21.329000 audit: BPF prog-id=79 op=LOAD Jan 14 13:40:21.329000 audit: BPF prog-id=80 op=LOAD Jan 14 13:40:21.329000 audit: BPF prog-id=54 op=UNLOAD Jan 14 13:40:21.329000 audit: BPF prog-id=55 op=UNLOAD Jan 14 13:40:21.354417 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:40:21.354563 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:40:21.355103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:40:21.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:40:21.355255 systemd[1]: kubelet.service: Consumed 504ms CPU time, 98.5M memory peak. Jan 14 13:40:21.357419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:40:21.564356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:40:21.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:40:21.578228 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:40:21.744958 kubelet[2440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:40:21.744958 kubelet[2440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 13:40:21.744958 kubelet[2440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:40:21.744958 kubelet[2440]: I0114 13:40:21.744945 2440 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:40:21.815884 update_engine[1578]: I20260114 13:40:21.812909 1578 update_attempter.cc:509] Updating boot flags... 
Jan 14 13:40:22.563434 kubelet[2440]: I0114 13:40:22.563239 2440 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 13:40:22.563434 kubelet[2440]: I0114 13:40:22.563362 2440 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:40:22.564198 kubelet[2440]: I0114 13:40:22.564116 2440 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 13:40:22.600256 kubelet[2440]: E0114 13:40:22.600092 2440 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:40:22.611753 kubelet[2440]: I0114 13:40:22.610139 2440 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:40:22.668425 kubelet[2440]: I0114 13:40:22.668337 2440 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 13:40:22.681253 kubelet[2440]: I0114 13:40:22.681221 2440 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:40:22.681954 kubelet[2440]: I0114 13:40:22.681918 2440 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:40:22.682331 kubelet[2440]: I0114 13:40:22.681958 2440 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 13:40:22.682458 kubelet[2440]: I0114 13:40:22.682383 2440 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 14 13:40:22.682458 kubelet[2440]: I0114 13:40:22.682394 2440 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 13:40:22.682621 kubelet[2440]: I0114 13:40:22.682608 2440 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:40:22.685898 kubelet[2440]: I0114 13:40:22.685810 2440 kubelet.go:446] "Attempting to sync node with API server" Jan 14 13:40:22.685898 kubelet[2440]: I0114 13:40:22.685907 2440 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:40:22.686128 kubelet[2440]: I0114 13:40:22.685977 2440 kubelet.go:352] "Adding apiserver pod source" Jan 14 13:40:22.687778 kubelet[2440]: I0114 13:40:22.686860 2440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:40:22.694507 kubelet[2440]: W0114 13:40:22.694416 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 14 13:40:22.694592 kubelet[2440]: E0114 13:40:22.694513 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:40:22.694878 kubelet[2440]: W0114 13:40:22.694824 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 14 13:40:22.694933 kubelet[2440]: E0114 13:40:22.694881 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: 
Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:40:22.695963 kubelet[2440]: I0114 13:40:22.695894 2440 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 13:40:22.697106 kubelet[2440]: I0114 13:40:22.697019 2440 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:40:22.697308 kubelet[2440]: W0114 13:40:22.697218 2440 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 13:40:22.701434 kubelet[2440]: I0114 13:40:22.701372 2440 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 13:40:22.702410 kubelet[2440]: I0114 13:40:22.701520 2440 server.go:1287] "Started kubelet" Jan 14 13:40:22.702410 kubelet[2440]: I0114 13:40:22.701908 2440 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:40:22.703814 kubelet[2440]: I0114 13:40:22.703747 2440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:40:22.704244 kubelet[2440]: I0114 13:40:22.704165 2440 server.go:479] "Adding debug handlers to kubelet server" Jan 14 13:40:22.706635 kubelet[2440]: I0114 13:40:22.706495 2440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:40:22.707005 kubelet[2440]: I0114 13:40:22.706927 2440 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:40:22.707150 kubelet[2440]: E0114 13:40:22.705894 2440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a9ca1dc9f2034 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 13:40:22.701424692 +0000 UTC m=+1.109252287,LastTimestamp:2026-01-14 13:40:22.701424692 +0000 UTC m=+1.109252287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 13:40:22.707518 kubelet[2440]: I0114 13:40:22.707489 2440 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 13:40:22.710075 kubelet[2440]: E0114 13:40:22.710044 2440 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 13:40:22.710166 kubelet[2440]: I0114 13:40:22.710119 2440 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 13:40:22.710577 kubelet[2440]: I0114 13:40:22.710488 2440 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 13:40:22.710643 kubelet[2440]: I0114 13:40:22.710625 2440 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:40:22.711560 kubelet[2440]: W0114 13:40:22.711480 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 14 13:40:22.711639 kubelet[2440]: E0114 13:40:22.711568 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection 
refused" logger="UnhandledError" Jan 14 13:40:22.711000 audit[2471]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.711000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff2bd5d370 a2=0 a3=0 items=0 ppid=2440 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 13:40:22.712850 kubelet[2440]: E0114 13:40:22.712447 2440 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:40:22.712850 kubelet[2440]: I0114 13:40:22.712608 2440 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:40:22.712850 kubelet[2440]: I0114 13:40:22.712807 2440 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:40:22.713088 kubelet[2440]: E0114 13:40:22.713019 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Jan 14 13:40:22.713000 audit[2472]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.713000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc637907a0 a2=0 a3=0 items=0 ppid=2440 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 13:40:22.715204 kubelet[2440]: I0114 13:40:22.715139 2440 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:40:22.718000 audit[2474]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.718000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd7a8d4fc0 a2=0 a3=0 items=0 ppid=2440 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:40:22.722000 audit[2476]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.722000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4dd88a30 a2=0 a3=0 items=0 ppid=2440 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:40:22.734000 audit[2479]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.734000 audit[2479]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc8230c550 a2=0 a3=0 items=0 ppid=2440 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 13:40:22.737114 kubelet[2440]: I0114 13:40:22.737031 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:40:22.741000 audit[2484]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:22.741000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff31d714d0 a2=0 a3=0 items=0 ppid=2440 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 13:40:22.743130 kubelet[2440]: I0114 13:40:22.743011 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:40:22.743539 kubelet[2440]: I0114 13:40:22.743404 2440 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 13:40:22.744081 kubelet[2440]: I0114 13:40:22.743998 2440 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 13:40:22.744081 kubelet[2440]: I0114 13:40:22.744070 2440 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 13:40:22.751352 kubelet[2440]: E0114 13:40:22.748392 2440 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:40:22.759034 kubelet[2440]: W0114 13:40:22.758872 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 14 13:40:22.759034 kubelet[2440]: E0114 13:40:22.758922 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:40:22.761000 audit[2486]: NETFILTER_CFG table=mangle:48 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:22.761000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed3411e80 a2=0 a3=0 items=0 ppid=2440 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 13:40:22.764000 audit[2489]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:22.764000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0588bfa0 
a2=0 a3=0 items=0 ppid=2440 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 13:40:22.765000 audit[2485]: NETFILTER_CFG table=mangle:50 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.765000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5a8b5e90 a2=0 a3=0 items=0 ppid=2440 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 13:40:22.767000 audit[2491]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.767000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceaeff9c0 a2=0 a3=0 items=0 ppid=2440 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 13:40:22.769000 audit[2490]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:22.769000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffee86f8270 a2=0 a3=0 items=0 ppid=2440 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 13:40:22.770000 audit[2492]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:22.770000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6e2a03d0 a2=0 a3=0 items=0 ppid=2440 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:22.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 13:40:22.810495 kubelet[2440]: E0114 13:40:22.810431 2440 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 13:40:22.815932 kubelet[2440]: I0114 13:40:22.815824 2440 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 13:40:22.815932 kubelet[2440]: I0114 13:40:22.815860 2440 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 13:40:22.815932 kubelet[2440]: I0114 13:40:22.815905 2440 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:40:22.819092 kubelet[2440]: I0114 13:40:22.819013 2440 policy_none.go:49] "None policy: Start" Jan 14 13:40:22.819092 kubelet[2440]: I0114 13:40:22.819082 2440 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 13:40:22.819209 kubelet[2440]: I0114 13:40:22.819106 2440 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:40:22.832438 
systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 13:40:22.850360 kubelet[2440]: E0114 13:40:22.850262 2440 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 13:40:22.858850 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:40:22.865601 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 13:40:22.881752 kubelet[2440]: I0114 13:40:22.880436 2440 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:40:22.882136 kubelet[2440]: I0114 13:40:22.882106 2440 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 13:40:22.882392 kubelet[2440]: I0114 13:40:22.882262 2440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:40:22.882825 kubelet[2440]: I0114 13:40:22.882773 2440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:40:22.884081 kubelet[2440]: E0114 13:40:22.883872 2440 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 13:40:22.884081 kubelet[2440]: E0114 13:40:22.884046 2440 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 13:40:22.915033 kubelet[2440]: E0114 13:40:22.914903 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Jan 14 13:40:22.986400 kubelet[2440]: I0114 13:40:22.985847 2440 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:40:22.986537 kubelet[2440]: E0114 13:40:22.986513 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 14 13:40:23.064234 systemd[1]: Created slice kubepods-burstable-podb04009aecd533090299b249e42c6d46c.slice - libcontainer container kubepods-burstable-podb04009aecd533090299b249e42c6d46c.slice. Jan 14 13:40:23.078428 kubelet[2440]: E0114 13:40:23.078245 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:23.084059 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 14 13:40:23.087185 kubelet[2440]: E0114 13:40:23.087151 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:23.089154 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
Jan 14 13:40:23.092413 kubelet[2440]: E0114 13:40:23.092324 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:23.112399 kubelet[2440]: I0114 13:40:23.112253 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:23.112399 kubelet[2440]: I0114 13:40:23.112318 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:23.112399 kubelet[2440]: I0114 13:40:23.112338 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:23.112399 kubelet[2440]: I0114 13:40:23.112389 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:23.112399 kubelet[2440]: I0114 13:40:23.112408 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:23.112898 kubelet[2440]: I0114 13:40:23.112423 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:23.112898 kubelet[2440]: I0114 13:40:23.112437 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:23.112898 kubelet[2440]: I0114 13:40:23.112451 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:23.112898 kubelet[2440]: I0114 13:40:23.112465 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:23.191228 kubelet[2440]: I0114 13:40:23.191099 2440 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:40:23.191679 
kubelet[2440]: E0114 13:40:23.191637 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 14 13:40:23.316364 kubelet[2440]: E0114 13:40:23.316284 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Jan 14 13:40:23.380113 kubelet[2440]: E0114 13:40:23.380026 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:23.381658 containerd[1601]: time="2026-01-14T13:40:23.381464511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b04009aecd533090299b249e42c6d46c,Namespace:kube-system,Attempt:0,}" Jan 14 13:40:23.388880 kubelet[2440]: E0114 13:40:23.388836 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:23.389548 containerd[1601]: time="2026-01-14T13:40:23.389495282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 14 13:40:23.393075 kubelet[2440]: E0114 13:40:23.393032 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:23.393443 containerd[1601]: time="2026-01-14T13:40:23.393404151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 14 
13:40:23.520570 containerd[1601]: time="2026-01-14T13:40:23.520340014Z" level=info msg="connecting to shim d358c9a18813ddf69d1e21adcd6ef169eff9aad286b72485cec2a9ac40926605" address="unix:///run/containerd/s/dd72e4187f6fa34b108147ed9e37e4418b4f106a5321ed88ba5c40b509e5e12d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:23.526248 containerd[1601]: time="2026-01-14T13:40:23.526211202Z" level=info msg="connecting to shim 50c50b394b4c856637c919c323279a60afa90a788eb971a92ef90dc5069866cd" address="unix:///run/containerd/s/590c9f24feb31b8089d8674cc15cc6ae11b3c3426dc83a57585cff4f65986cb6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:23.527780 containerd[1601]: time="2026-01-14T13:40:23.527643282Z" level=info msg="connecting to shim 77d467542d595d854c7e24369c084378d1b64f06d3240b2cd5499cb7107e3f45" address="unix:///run/containerd/s/859ef071d1697a2fa4541c0c23f6dc7040e3afdb640b776248734a65eb92c048" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:23.553291 kubelet[2440]: W0114 13:40:23.553213 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 14 13:40:23.553443 kubelet[2440]: E0114 13:40:23.553306 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:40:23.573917 systemd[1]: Started cri-containerd-d358c9a18813ddf69d1e21adcd6ef169eff9aad286b72485cec2a9ac40926605.scope - libcontainer container d358c9a18813ddf69d1e21adcd6ef169eff9aad286b72485cec2a9ac40926605. 
Jan 14 13:40:23.578526 systemd[1]: Started cri-containerd-77d467542d595d854c7e24369c084378d1b64f06d3240b2cd5499cb7107e3f45.scope - libcontainer container 77d467542d595d854c7e24369c084378d1b64f06d3240b2cd5499cb7107e3f45. Jan 14 13:40:23.584552 systemd[1]: Started cri-containerd-50c50b394b4c856637c919c323279a60afa90a788eb971a92ef90dc5069866cd.scope - libcontainer container 50c50b394b4c856637c919c323279a60afa90a788eb971a92ef90dc5069866cd. Jan 14 13:40:23.597476 kubelet[2440]: I0114 13:40:23.597405 2440 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:40:23.598404 kubelet[2440]: E0114 13:40:23.598035 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 14 13:40:23.599000 audit: BPF prog-id=81 op=LOAD Jan 14 13:40:23.599000 audit: BPF prog-id=82 op=LOAD Jan 14 13:40:23.599000 audit[2551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.600000 audit: BPF prog-id=82 op=UNLOAD Jan 14 13:40:23.600000 audit[2551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.600000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.600000 audit: BPF prog-id=83 op=LOAD Jan 14 13:40:23.600000 audit[2551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.600000 audit: BPF prog-id=84 op=LOAD Jan 14 13:40:23.600000 audit[2551]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.600000 audit: BPF prog-id=84 op=UNLOAD Jan 14 13:40:23.600000 audit[2551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 13:40:23.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.600000 audit: BPF prog-id=83 op=UNLOAD Jan 14 13:40:23.600000 audit[2551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.600000 audit: BPF prog-id=85 op=LOAD Jan 14 13:40:23.600000 audit[2551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2522 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643436373534326435393564383534633765323433363963303834 Jan 14 13:40:23.602000 audit: BPF prog-id=86 op=LOAD Jan 14 13:40:23.603000 audit: BPF prog-id=87 op=LOAD Jan 14 13:40:23.603000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2504 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.603000 audit: BPF prog-id=87 op=UNLOAD Jan 14 13:40:23.603000 audit[2546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.603000 audit: BPF prog-id=88 op=LOAD Jan 14 13:40:23.603000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2504 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.603000 audit: BPF prog-id=89 op=LOAD Jan 14 13:40:23.603000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2504 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.603000 audit: BPF prog-id=89 op=UNLOAD Jan 14 13:40:23.603000 audit[2546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.603000 audit: BPF prog-id=88 op=UNLOAD Jan 14 13:40:23.603000 audit[2546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.604000 audit: BPF prog-id=90 op=LOAD Jan 14 13:40:23.604000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2504 pid=2546 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353863396131383831336464663639643165323161646364366566 Jan 14 13:40:23.607000 audit: BPF prog-id=91 op=LOAD Jan 14 13:40:23.608000 audit: BPF prog-id=92 op=LOAD Jan 14 13:40:23.608000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.608000 audit: BPF prog-id=92 op=UNLOAD Jan 14 13:40:23.608000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.608000 audit: BPF prog-id=93 op=LOAD Jan 14 13:40:23.608000 audit[2571]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.609000 audit: BPF prog-id=94 op=LOAD Jan 14 13:40:23.609000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.609000 audit: BPF prog-id=94 op=UNLOAD Jan 14 13:40:23.609000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.609000 audit: BPF prog-id=93 op=UNLOAD 
Jan 14 13:40:23.609000 audit[2571]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.610000 audit: BPF prog-id=95 op=LOAD Jan 14 13:40:23.610000 audit[2571]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2521 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530633530623339346234633835363633376339313963333233323739 Jan 14 13:40:23.637487 kubelet[2440]: W0114 13:40:23.637130 2440 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 14 13:40:23.637487 kubelet[2440]: E0114 13:40:23.637417 2440 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" 
logger="UnhandledError" Jan 14 13:40:23.647453 containerd[1601]: time="2026-01-14T13:40:23.647375183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d467542d595d854c7e24369c084378d1b64f06d3240b2cd5499cb7107e3f45\"" Jan 14 13:40:23.653765 kubelet[2440]: E0114 13:40:23.652899 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:23.658519 containerd[1601]: time="2026-01-14T13:40:23.658461552Z" level=info msg="CreateContainer within sandbox \"77d467542d595d854c7e24369c084378d1b64f06d3240b2cd5499cb7107e3f45\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:40:23.663365 containerd[1601]: time="2026-01-14T13:40:23.663321601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b04009aecd533090299b249e42c6d46c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d358c9a18813ddf69d1e21adcd6ef169eff9aad286b72485cec2a9ac40926605\"" Jan 14 13:40:23.664379 kubelet[2440]: E0114 13:40:23.664361 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:23.667817 containerd[1601]: time="2026-01-14T13:40:23.666893636Z" level=info msg="CreateContainer within sandbox \"d358c9a18813ddf69d1e21adcd6ef169eff9aad286b72485cec2a9ac40926605\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:40:23.676749 containerd[1601]: time="2026-01-14T13:40:23.676622742Z" level=info msg="Container 0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:40:23.682134 containerd[1601]: time="2026-01-14T13:40:23.682025444Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"50c50b394b4c856637c919c323279a60afa90a788eb971a92ef90dc5069866cd\"" Jan 14 13:40:23.683052 kubelet[2440]: E0114 13:40:23.682984 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:23.684738 containerd[1601]: time="2026-01-14T13:40:23.684639646Z" level=info msg="CreateContainer within sandbox \"50c50b394b4c856637c919c323279a60afa90a788eb971a92ef90dc5069866cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:40:23.689456 containerd[1601]: time="2026-01-14T13:40:23.689404920Z" level=info msg="CreateContainer within sandbox \"77d467542d595d854c7e24369c084378d1b64f06d3240b2cd5499cb7107e3f45\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7\"" Jan 14 13:40:23.690096 containerd[1601]: time="2026-01-14T13:40:23.690052267Z" level=info msg="StartContainer for \"0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7\"" Jan 14 13:40:23.691510 containerd[1601]: time="2026-01-14T13:40:23.691450054Z" level=info msg="connecting to shim 0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7" address="unix:///run/containerd/s/859ef071d1697a2fa4541c0c23f6dc7040e3afdb640b776248734a65eb92c048" protocol=ttrpc version=3 Jan 14 13:40:23.695772 containerd[1601]: time="2026-01-14T13:40:23.694964068Z" level=info msg="Container 68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:40:23.701615 containerd[1601]: time="2026-01-14T13:40:23.701572937Z" level=info msg="Container 108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9: CDI devices from CRI Config.CDIDevices: []" 
Jan 14 13:40:23.710545 containerd[1601]: time="2026-01-14T13:40:23.710516792Z" level=info msg="CreateContainer within sandbox \"d358c9a18813ddf69d1e21adcd6ef169eff9aad286b72485cec2a9ac40926605\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423\"" Jan 14 13:40:23.712530 containerd[1601]: time="2026-01-14T13:40:23.712507934Z" level=info msg="StartContainer for \"68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423\"" Jan 14 13:40:23.714735 containerd[1601]: time="2026-01-14T13:40:23.714582703Z" level=info msg="connecting to shim 68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423" address="unix:///run/containerd/s/dd72e4187f6fa34b108147ed9e37e4418b4f106a5321ed88ba5c40b509e5e12d" protocol=ttrpc version=3 Jan 14 13:40:23.715006 containerd[1601]: time="2026-01-14T13:40:23.714849850Z" level=info msg="CreateContainer within sandbox \"50c50b394b4c856637c919c323279a60afa90a788eb971a92ef90dc5069866cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9\"" Jan 14 13:40:23.715391 containerd[1601]: time="2026-01-14T13:40:23.715289576Z" level=info msg="StartContainer for \"108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9\"" Jan 14 13:40:23.716760 containerd[1601]: time="2026-01-14T13:40:23.716631978Z" level=info msg="connecting to shim 108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9" address="unix:///run/containerd/s/590c9f24feb31b8089d8674cc15cc6ae11b3c3426dc83a57585cff4f65986cb6" protocol=ttrpc version=3 Jan 14 13:40:23.723955 systemd[1]: Started cri-containerd-0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7.scope - libcontainer container 0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7. 
Jan 14 13:40:23.754255 systemd[1]: Started cri-containerd-108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9.scope - libcontainer container 108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9. Jan 14 13:40:23.756177 systemd[1]: Started cri-containerd-68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423.scope - libcontainer container 68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423. Jan 14 13:40:23.768000 audit: BPF prog-id=96 op=LOAD Jan 14 13:40:23.769000 audit: BPF prog-id=97 op=LOAD Jan 14 13:40:23.769000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.770000 audit: BPF prog-id=97 op=UNLOAD Jan 14 13:40:23.770000 audit[2634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.771000 audit: BPF prog-id=98 op=LOAD Jan 14 13:40:23.771000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c00017a488 a2=98 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.771000 audit: BPF prog-id=99 op=LOAD Jan 14 13:40:23.771000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.773000 audit: BPF prog-id=99 op=UNLOAD Jan 14 13:40:23.773000 audit[2634]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.774000 audit: BPF prog-id=98 op=UNLOAD Jan 14 13:40:23.774000 audit[2634]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.775000 audit: BPF prog-id=100 op=LOAD Jan 14 13:40:23.775000 audit[2634]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2522 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065383961323130613263343338356339323337623136336336613562 Jan 14 13:40:23.784000 audit: BPF prog-id=101 op=LOAD Jan 14 13:40:23.786000 audit: BPF prog-id=102 op=LOAD Jan 14 13:40:23.786000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 
14 13:40:23.786000 audit: BPF prog-id=102 op=UNLOAD Jan 14 13:40:23.786000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 14 13:40:23.788000 audit: BPF prog-id=103 op=LOAD Jan 14 13:40:23.788000 audit: BPF prog-id=104 op=LOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.788000 audit: BPF prog-id=104 op=UNLOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.788000 audit: BPF prog-id=105 op=LOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.788000 audit: BPF prog-id=106 op=LOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.788000 audit: BPF prog-id=106 op=UNLOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.788000 audit: BPF prog-id=105 op=UNLOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.789000 audit: BPF prog-id=107 op=LOAD Jan 14 13:40:23.789000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 14 13:40:23.789000 audit: BPF prog-id=108 op=LOAD Jan 14 13:40:23.789000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 14 13:40:23.789000 audit: BPF prog-id=108 op=UNLOAD Jan 14 13:40:23.789000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 14 13:40:23.789000 audit: BPF prog-id=107 op=UNLOAD Jan 14 13:40:23.789000 audit[2647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 14 13:40:23.789000 audit: BPF prog-id=109 op=LOAD Jan 14 13:40:23.789000 audit[2647]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2521 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130386531623634353234336566623631366465353666623861363462 Jan 14 13:40:23.788000 audit: BPF prog-id=110 op=LOAD Jan 14 13:40:23.788000 audit[2646]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2504 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:23.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638633365393561663232666634653434333236366330623335336664 Jan 14 13:40:23.854736 containerd[1601]: time="2026-01-14T13:40:23.854049132Z" level=info msg="StartContainer for \"68c3e95af22ff4e443266c0b353fdbf2734c405dad0aff148145e7b6d011d423\" returns successfully" Jan 14 13:40:23.866832 containerd[1601]: time="2026-01-14T13:40:23.866646544Z" level=info msg="StartContainer for \"108e1b645243efb616de56fb8a64b67028924cd52b26c60d082c344b3b3281f9\" returns successfully" Jan 14 13:40:23.872836 containerd[1601]: time="2026-01-14T13:40:23.872797002Z" level=info msg="StartContainer for \"0e89a210a2c4385c9237b163c6a5b26c62a40945d70a37d3ed202e342b2aafc7\" returns successfully" Jan 14 13:40:24.402081 kubelet[2440]: I0114 13:40:24.402020 2440 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:40:24.926885 kubelet[2440]: E0114 13:40:24.924795 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:24.926885 kubelet[2440]: E0114 13:40:24.925341 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:24.929720 kubelet[2440]: E0114 13:40:24.927767 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:24.929720 kubelet[2440]: E0114 13:40:24.927933 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:24.930285 kubelet[2440]: E0114 13:40:24.930229 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:24.930474 kubelet[2440]: E0114 13:40:24.930420 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:26.966659 kubelet[2440]: E0114 13:40:26.966320 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:26.966659 kubelet[2440]: E0114 13:40:26.966510 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:26.967887 kubelet[2440]: E0114 13:40:26.966931 2440 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:40:26.967887 kubelet[2440]: E0114 13:40:26.966948 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:26.967887 kubelet[2440]: E0114 13:40:26.967122 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:26.967887 kubelet[2440]: E0114 13:40:26.967199 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:27.672322 kubelet[2440]: E0114 13:40:27.672202 2440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 13:40:27.889427 kubelet[2440]: I0114 13:40:27.885896 2440 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 13:40:27.889427 kubelet[2440]: E0114 13:40:27.886082 2440 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 14 13:40:27.907154 kubelet[2440]: I0114 13:40:27.907074 2440 apiserver.go:52] "Watching apiserver" Jan 14 13:40:27.933594 kubelet[2440]: I0114 13:40:27.926503 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:27.933594 kubelet[2440]: I0114 13:40:27.931351 2440 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 13:40:27.987499 kubelet[2440]: E0114 13:40:27.986940 2440 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:27.987499 kubelet[2440]: I0114 13:40:27.987186 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:28.021769 
kubelet[2440]: I0114 13:40:28.021341 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:28.021769 kubelet[2440]: I0114 13:40:28.021930 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:28.026859 kubelet[2440]: E0114 13:40:28.024115 2440 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:28.026859 kubelet[2440]: I0114 13:40:28.024142 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:28.028929 kubelet[2440]: E0114 13:40:28.028832 2440 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:28.029630 kubelet[2440]: E0114 13:40:28.029209 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:28.029630 kubelet[2440]: E0114 13:40:28.029520 2440 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:28.029630 kubelet[2440]: E0114 13:40:28.029499 2440 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:28.030808 kubelet[2440]: E0114 13:40:28.030256 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:29.024061 kubelet[2440]: I0114 13:40:29.023954 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:29.040172 kubelet[2440]: E0114 13:40:29.037642 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:29.569379 kubelet[2440]: I0114 13:40:29.569229 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:29.684994 kubelet[2440]: E0114 13:40:29.684336 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:30.028498 kubelet[2440]: E0114 13:40:30.027600 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:30.028498 kubelet[2440]: E0114 13:40:30.028126 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:31.198257 systemd[1]: Reload requested from client PID 2739 ('systemctl') (unit session-10.scope)... Jan 14 13:40:31.198385 systemd[1]: Reloading... Jan 14 13:40:31.475806 zram_generator::config[2782]: No configuration found. 
Jan 14 13:40:31.788198 kubelet[2440]: I0114 13:40:31.787987 2440 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:31.826343 kubelet[2440]: E0114 13:40:31.826142 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:32.203498 kubelet[2440]: E0114 13:40:32.203191 2440 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:32.233625 systemd[1]: Reloading finished in 1034 ms. Jan 14 13:40:32.384904 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:40:32.402070 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:40:32.402894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:40:32.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:32.408459 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 13:40:32.408878 kernel: audit: type=1131 audit(1768398032.402:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:32.403210 systemd[1]: kubelet.service: Consumed 2.955s CPU time, 131M memory peak. Jan 14 13:40:32.439259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 13:40:32.471416 kernel: audit: type=1334 audit(1768398032.443:385): prog-id=111 op=LOAD Jan 14 13:40:32.443000 audit: BPF prog-id=111 op=LOAD Jan 14 13:40:32.474000 audit: BPF prog-id=61 op=UNLOAD Jan 14 13:40:32.475000 audit: BPF prog-id=112 op=LOAD Jan 14 13:40:32.483417 kernel: audit: type=1334 audit(1768398032.474:386): prog-id=61 op=UNLOAD Jan 14 13:40:32.483485 kernel: audit: type=1334 audit(1768398032.475:387): prog-id=112 op=LOAD Jan 14 13:40:32.483516 kernel: audit: type=1334 audit(1768398032.475:388): prog-id=113 op=LOAD Jan 14 13:40:32.475000 audit: BPF prog-id=113 op=LOAD Jan 14 13:40:32.475000 audit: BPF prog-id=62 op=UNLOAD Jan 14 13:40:32.490899 kernel: audit: type=1334 audit(1768398032.475:389): prog-id=62 op=UNLOAD Jan 14 13:40:32.475000 audit: BPF prog-id=63 op=UNLOAD Jan 14 13:40:32.497781 kernel: audit: type=1334 audit(1768398032.475:390): prog-id=63 op=UNLOAD Jan 14 13:40:32.477000 audit: BPF prog-id=114 op=LOAD Jan 14 13:40:32.477000 audit: BPF prog-id=71 op=UNLOAD Jan 14 13:40:32.504990 kernel: audit: type=1334 audit(1768398032.477:391): prog-id=114 op=LOAD Jan 14 13:40:32.505065 kernel: audit: type=1334 audit(1768398032.477:392): prog-id=71 op=UNLOAD Jan 14 13:40:32.478000 audit: BPF prog-id=115 op=LOAD Jan 14 13:40:32.478000 audit: BPF prog-id=116 op=LOAD Jan 14 13:40:32.478000 audit: BPF prog-id=79 op=UNLOAD Jan 14 13:40:32.478000 audit: BPF prog-id=80 op=UNLOAD Jan 14 13:40:32.482000 audit: BPF prog-id=117 op=LOAD Jan 14 13:40:32.482000 audit: BPF prog-id=64 op=UNLOAD Jan 14 13:40:32.485000 audit: BPF prog-id=118 op=LOAD Jan 14 13:40:32.485000 audit: BPF prog-id=72 op=UNLOAD Jan 14 13:40:32.488000 audit: BPF prog-id=119 op=LOAD Jan 14 13:40:32.488000 audit: BPF prog-id=73 op=UNLOAD Jan 14 13:40:32.488000 audit: BPF prog-id=120 op=LOAD Jan 14 13:40:32.488000 audit: BPF prog-id=121 op=LOAD Jan 14 13:40:32.488000 audit: BPF prog-id=74 op=UNLOAD Jan 14 13:40:32.488000 audit: BPF prog-id=75 op=UNLOAD Jan 14 13:40:32.490000 audit: BPF 
prog-id=122 op=LOAD Jan 14 13:40:32.490000 audit: BPF prog-id=65 op=UNLOAD Jan 14 13:40:32.509779 kernel: audit: type=1334 audit(1768398032.478:393): prog-id=115 op=LOAD Jan 14 13:40:32.490000 audit: BPF prog-id=123 op=LOAD Jan 14 13:40:32.490000 audit: BPF prog-id=124 op=LOAD Jan 14 13:40:32.490000 audit: BPF prog-id=66 op=UNLOAD Jan 14 13:40:32.490000 audit: BPF prog-id=67 op=UNLOAD Jan 14 13:40:32.492000 audit: BPF prog-id=125 op=LOAD Jan 14 13:40:32.492000 audit: BPF prog-id=68 op=UNLOAD Jan 14 13:40:32.492000 audit: BPF prog-id=126 op=LOAD Jan 14 13:40:32.492000 audit: BPF prog-id=127 op=LOAD Jan 14 13:40:32.492000 audit: BPF prog-id=69 op=UNLOAD Jan 14 13:40:32.492000 audit: BPF prog-id=70 op=UNLOAD Jan 14 13:40:32.494000 audit: BPF prog-id=128 op=LOAD Jan 14 13:40:32.494000 audit: BPF prog-id=76 op=UNLOAD Jan 14 13:40:32.494000 audit: BPF prog-id=129 op=LOAD Jan 14 13:40:32.494000 audit: BPF prog-id=130 op=LOAD Jan 14 13:40:32.494000 audit: BPF prog-id=77 op=UNLOAD Jan 14 13:40:32.494000 audit: BPF prog-id=78 op=UNLOAD Jan 14 13:40:32.919830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:40:32.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:32.928862 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:40:33.142155 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:40:33.142155 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 13:40:33.142155 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:40:33.142155 kubelet[2830]: I0114 13:40:33.140208 2830 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:40:33.160780 kubelet[2830]: I0114 13:40:33.159768 2830 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 13:40:33.160780 kubelet[2830]: I0114 13:40:33.159796 2830 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:40:33.160780 kubelet[2830]: I0114 13:40:33.160057 2830 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 13:40:33.161776 kubelet[2830]: I0114 13:40:33.161653 2830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 13:40:33.164333 kubelet[2830]: I0114 13:40:33.164313 2830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:40:33.171381 kubelet[2830]: I0114 13:40:33.171305 2830 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 13:40:33.184329 kubelet[2830]: I0114 13:40:33.184218 2830 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:40:33.184615 kubelet[2830]: I0114 13:40:33.184545 2830 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:40:33.184845 kubelet[2830]: I0114 13:40:33.184588 2830 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 13:40:33.185028 kubelet[2830]: I0114 13:40:33.184863 2830 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 14 13:40:33.185028 kubelet[2830]: I0114 13:40:33.184874 2830 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 13:40:33.185028 kubelet[2830]: I0114 13:40:33.184937 2830 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:40:33.185252 kubelet[2830]: I0114 13:40:33.185209 2830 kubelet.go:446] "Attempting to sync node with API server" Jan 14 13:40:33.185252 kubelet[2830]: I0114 13:40:33.185250 2830 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:40:33.185317 kubelet[2830]: I0114 13:40:33.185272 2830 kubelet.go:352] "Adding apiserver pod source" Jan 14 13:40:33.185317 kubelet[2830]: I0114 13:40:33.185284 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:40:33.187567 kubelet[2830]: I0114 13:40:33.187498 2830 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 13:40:33.189153 kubelet[2830]: I0114 13:40:33.189037 2830 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:40:33.189763 kubelet[2830]: I0114 13:40:33.189616 2830 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 13:40:33.189763 kubelet[2830]: I0114 13:40:33.189648 2830 server.go:1287] "Started kubelet" Jan 14 13:40:33.192415 kubelet[2830]: I0114 13:40:33.192365 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:40:33.193645 kubelet[2830]: I0114 13:40:33.193310 2830 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 13:40:33.193645 kubelet[2830]: I0114 13:40:33.193403 2830 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 13:40:33.194826 kubelet[2830]: I0114 13:40:33.194733 2830 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:40:33.195581 kubelet[2830]: I0114 13:40:33.195522 2830 server.go:479] "Adding debug handlers to kubelet server" Jan 14 13:40:33.196928 
kubelet[2830]: I0114 13:40:33.196872 2830 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:40:33.196928 kubelet[2830]: I0114 13:40:33.196872 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:40:33.197580 kubelet[2830]: I0114 13:40:33.197093 2830 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:40:33.197580 kubelet[2830]: I0114 13:40:33.197288 2830 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 13:40:33.198180 kubelet[2830]: E0114 13:40:33.198067 2830 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 13:40:33.200333 kubelet[2830]: I0114 13:40:33.200262 2830 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:40:33.200333 kubelet[2830]: I0114 13:40:33.200411 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:40:33.203306 kubelet[2830]: E0114 13:40:33.203019 2830 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:40:33.207119 kubelet[2830]: I0114 13:40:33.206927 2830 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:40:33.241532 kubelet[2830]: I0114 13:40:33.240107 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:40:33.245169 kubelet[2830]: I0114 13:40:33.244903 2830 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 13:40:33.245427 kubelet[2830]: I0114 13:40:33.245307 2830 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 13:40:33.246074 kubelet[2830]: I0114 13:40:33.245652 2830 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 13:40:33.248811 kubelet[2830]: I0114 13:40:33.248065 2830 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 13:40:33.248811 kubelet[2830]: E0114 13:40:33.248136 2830 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:40:33.366932 kubelet[2830]: E0114 13:40:33.363599 2830 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449003 2830 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449059 2830 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449092 2830 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449547 2830 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449566 2830 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449592 2830 policy_none.go:49] "None policy: Start" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449607 2830 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449623 2830 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:40:33.451389 kubelet[2830]: I0114 13:40:33.449891 2830 state_mem.go:75] "Updated machine memory state" Jan 14 13:40:33.461892 kubelet[2830]: I0114 13:40:33.461653 
2830 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:40:33.462393 kubelet[2830]: I0114 13:40:33.462284 2830 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 13:40:33.462451 kubelet[2830]: I0114 13:40:33.462339 2830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:40:33.463238 kubelet[2830]: I0114 13:40:33.463110 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:40:33.467355 kubelet[2830]: E0114 13:40:33.467333 2830 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 13:40:33.565430 kubelet[2830]: I0114 13:40:33.565239 2830 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:33.565781 kubelet[2830]: I0114 13:40:33.565250 2830 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.565966 kubelet[2830]: I0114 13:40:33.565891 2830 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:33.578934 kubelet[2830]: E0114 13:40:33.578870 2830 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.581065 kubelet[2830]: E0114 13:40:33.580889 2830 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:33.581065 kubelet[2830]: E0114 13:40:33.581006 2830 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:33.581832 kubelet[2830]: I0114 13:40:33.581276 2830 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:40:33.600052 kubelet[2830]: I0114 13:40:33.600005 2830 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 13:40:33.600444 kubelet[2830]: I0114 13:40:33.600348 2830 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 13:40:33.613603 kubelet[2830]: I0114 13:40:33.613165 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.617875 kubelet[2830]: I0114 13:40:33.616868 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.617959 kubelet[2830]: I0114 13:40:33.617901 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.617959 kubelet[2830]: I0114 13:40:33.617925 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:33.617959 kubelet[2830]: I0114 
13:40:33.617944 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.618090 kubelet[2830]: I0114 13:40:33.617960 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:40:33.618090 kubelet[2830]: I0114 13:40:33.617979 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 14 13:40:33.618090 kubelet[2830]: I0114 13:40:33.617993 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:33.618090 kubelet[2830]: I0114 13:40:33.618008 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b04009aecd533090299b249e42c6d46c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b04009aecd533090299b249e42c6d46c\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:40:33.880055 
kubelet[2830]: E0114 13:40:33.879951 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:33.881806 kubelet[2830]: E0114 13:40:33.881759 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:33.882292 kubelet[2830]: E0114 13:40:33.882204 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:34.204649 kubelet[2830]: I0114 13:40:34.186122 2830 apiserver.go:52] "Watching apiserver" Jan 14 13:40:34.277888 kubelet[2830]: E0114 13:40:34.277107 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:34.277888 kubelet[2830]: E0114 13:40:34.277812 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:34.279255 kubelet[2830]: E0114 13:40:34.279181 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:34.294525 kubelet[2830]: I0114 13:40:34.294435 2830 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 13:40:34.312609 kubelet[2830]: I0114 13:40:34.312302 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.31205209 podStartE2EDuration="5.31205209s" podCreationTimestamp="2026-01-14 13:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:40:34.311323831 +0000 UTC m=+1.371894268" watchObservedRunningTime="2026-01-14 13:40:34.31205209 +0000 UTC m=+1.372622528" Jan 14 13:40:34.370146 kubelet[2830]: I0114 13:40:34.369035 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.3690065799999998 podStartE2EDuration="3.36900658s" podCreationTimestamp="2026-01-14 13:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:40:34.354900725 +0000 UTC m=+1.415471161" watchObservedRunningTime="2026-01-14 13:40:34.36900658 +0000 UTC m=+1.429577037" Jan 14 13:40:34.370146 kubelet[2830]: I0114 13:40:34.369176 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.369164585 podStartE2EDuration="5.369164585s" podCreationTimestamp="2026-01-14 13:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:40:34.368256403 +0000 UTC m=+1.428826870" watchObservedRunningTime="2026-01-14 13:40:34.369164585 +0000 UTC m=+1.429735042" Jan 14 13:40:35.286097 kubelet[2830]: E0114 13:40:35.285882 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:35.288143 kubelet[2830]: E0114 13:40:35.287422 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:35.631405 kubelet[2830]: I0114 13:40:35.631311 2830 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 
14 13:40:35.634037 containerd[1601]: time="2026-01-14T13:40:35.633614013Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:40:35.635065 kubelet[2830]: I0114 13:40:35.634530 2830 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:40:36.295750 kubelet[2830]: E0114 13:40:36.295044 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:36.425218 systemd[1]: Created slice kubepods-besteffort-pod5d515b9b_7a23_421f_9bad_de2ed3da470a.slice - libcontainer container kubepods-besteffort-pod5d515b9b_7a23_421f_9bad_de2ed3da470a.slice. Jan 14 13:40:36.459400 kubelet[2830]: I0114 13:40:36.459094 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d515b9b-7a23-421f-9bad-de2ed3da470a-xtables-lock\") pod \"kube-proxy-6mhs6\" (UID: \"5d515b9b-7a23-421f-9bad-de2ed3da470a\") " pod="kube-system/kube-proxy-6mhs6" Jan 14 13:40:36.459400 kubelet[2830]: I0114 13:40:36.459237 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhn5\" (UniqueName: \"kubernetes.io/projected/5d515b9b-7a23-421f-9bad-de2ed3da470a-kube-api-access-twhn5\") pod \"kube-proxy-6mhs6\" (UID: \"5d515b9b-7a23-421f-9bad-de2ed3da470a\") " pod="kube-system/kube-proxy-6mhs6" Jan 14 13:40:36.459400 kubelet[2830]: I0114 13:40:36.459343 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d515b9b-7a23-421f-9bad-de2ed3da470a-kube-proxy\") pod \"kube-proxy-6mhs6\" (UID: \"5d515b9b-7a23-421f-9bad-de2ed3da470a\") " pod="kube-system/kube-proxy-6mhs6" Jan 14 13:40:36.459400 kubelet[2830]: I0114 13:40:36.459377 2830 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d515b9b-7a23-421f-9bad-de2ed3da470a-lib-modules\") pod \"kube-proxy-6mhs6\" (UID: \"5d515b9b-7a23-421f-9bad-de2ed3da470a\") " pod="kube-system/kube-proxy-6mhs6" Jan 14 13:40:36.713320 systemd[1]: Created slice kubepods-besteffort-pod7a7718b9_7b54_444f_aace_23df1d67983a.slice - libcontainer container kubepods-besteffort-pod7a7718b9_7b54_444f_aace_23df1d67983a.slice. Jan 14 13:40:36.762907 kubelet[2830]: I0114 13:40:36.762067 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a7718b9-7b54-444f-aace-23df1d67983a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-86pnr\" (UID: \"7a7718b9-7b54-444f-aace-23df1d67983a\") " pod="tigera-operator/tigera-operator-7dcd859c48-86pnr" Jan 14 13:40:36.762907 kubelet[2830]: I0114 13:40:36.762413 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qp4\" (UniqueName: \"kubernetes.io/projected/7a7718b9-7b54-444f-aace-23df1d67983a-kube-api-access-r6qp4\") pod \"tigera-operator-7dcd859c48-86pnr\" (UID: \"7a7718b9-7b54-444f-aace-23df1d67983a\") " pod="tigera-operator/tigera-operator-7dcd859c48-86pnr" Jan 14 13:40:36.765798 kubelet[2830]: E0114 13:40:36.765625 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:36.772464 containerd[1601]: time="2026-01-14T13:40:36.772333988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mhs6,Uid:5d515b9b-7a23-421f-9bad-de2ed3da470a,Namespace:kube-system,Attempt:0,}" Jan 14 13:40:36.858366 containerd[1601]: time="2026-01-14T13:40:36.858280615Z" level=info msg="connecting to shim 
002d1740970218b5d91e7c1e042d942e9025c012c824410a895ff5c85883d4bf" address="unix:///run/containerd/s/362e925849223ac38fe76d5b25d72ab780d00c81f169b604bfd7fc512ea631a9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:36.973187 systemd[1]: Started cri-containerd-002d1740970218b5d91e7c1e042d942e9025c012c824410a895ff5c85883d4bf.scope - libcontainer container 002d1740970218b5d91e7c1e042d942e9025c012c824410a895ff5c85883d4bf. Jan 14 13:40:36.994000 audit: BPF prog-id=131 op=LOAD Jan 14 13:40:36.996000 audit: BPF prog-id=132 op=LOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:36.996000 audit: BPF prog-id=132 op=UNLOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:36.996000 audit: BPF prog-id=133 op=LOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c00017a488 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:36.996000 audit: BPF prog-id=134 op=LOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:36.996000 audit: BPF prog-id=134 op=UNLOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:36.996000 audit: BPF prog-id=133 op=UNLOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:36.996000 audit: BPF prog-id=135 op=LOAD Jan 14 13:40:36.996000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2886 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:36.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326431373430393730323138623564393165376331653034326439 Jan 14 13:40:37.019790 containerd[1601]: time="2026-01-14T13:40:37.019646581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-86pnr,Uid:7a7718b9-7b54-444f-aace-23df1d67983a,Namespace:tigera-operator,Attempt:0,}" Jan 14 13:40:37.031463 containerd[1601]: time="2026-01-14T13:40:37.031403612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mhs6,Uid:5d515b9b-7a23-421f-9bad-de2ed3da470a,Namespace:kube-system,Attempt:0,} returns sandbox id \"002d1740970218b5d91e7c1e042d942e9025c012c824410a895ff5c85883d4bf\"" Jan 14 13:40:37.033465 kubelet[2830]: E0114 13:40:37.033436 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:37.046432 containerd[1601]: time="2026-01-14T13:40:37.044634703Z" level=info msg="CreateContainer within sandbox \"002d1740970218b5d91e7c1e042d942e9025c012c824410a895ff5c85883d4bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:40:37.079296 containerd[1601]: time="2026-01-14T13:40:37.079216300Z" level=info msg="Container e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:40:37.081755 containerd[1601]: time="2026-01-14T13:40:37.081564170Z" level=info msg="connecting to shim a6570891487fb820d0841ba93fcda3add2399b0155fe965b29ec581be7a1f449" address="unix:///run/containerd/s/d083e6de6213024ae1bf2483c1b4a6feb8f019498ffe84a137ce9aef9ae9d185" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:37.093922 containerd[1601]: time="2026-01-14T13:40:37.093792231Z" level=info msg="CreateContainer within sandbox \"002d1740970218b5d91e7c1e042d942e9025c012c824410a895ff5c85883d4bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49\"" Jan 14 13:40:37.095078 containerd[1601]: time="2026-01-14T13:40:37.094972893Z" level=info msg="StartContainer for \"e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49\"" Jan 14 13:40:37.096847 containerd[1601]: time="2026-01-14T13:40:37.096789423Z" level=info msg="connecting to shim e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49" address="unix:///run/containerd/s/362e925849223ac38fe76d5b25d72ab780d00c81f169b604bfd7fc512ea631a9" protocol=ttrpc version=3 Jan 14 13:40:37.133117 systemd[1]: Started cri-containerd-e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49.scope - libcontainer container e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49. 
Jan 14 13:40:37.138498 systemd[1]: Started cri-containerd-a6570891487fb820d0841ba93fcda3add2399b0155fe965b29ec581be7a1f449.scope - libcontainer container a6570891487fb820d0841ba93fcda3add2399b0155fe965b29ec581be7a1f449. Jan 14 13:40:37.167000 audit: BPF prog-id=136 op=LOAD Jan 14 13:40:37.168000 audit: BPF prog-id=137 op=LOAD Jan 14 13:40:37.168000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.168000 audit: BPF prog-id=137 op=UNLOAD Jan 14 13:40:37.168000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.168000 audit: BPF prog-id=138 op=LOAD Jan 14 13:40:37.168000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:40:37.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.168000 audit: BPF prog-id=139 op=LOAD Jan 14 13:40:37.168000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.169000 audit: BPF prog-id=139 op=UNLOAD Jan 14 13:40:37.169000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.169000 audit: BPF prog-id=138 op=UNLOAD Jan 14 13:40:37.169000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.169000 audit: BPF prog-id=140 op=LOAD Jan 14 13:40:37.169000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136353730383931343837666238323064303834316261393366636461 Jan 14 13:40:37.212171 kubelet[2830]: E0114 13:40:37.211875 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:37.308611 kubelet[2830]: E0114 13:40:37.308261 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:37.308611 kubelet[2830]: E0114 13:40:37.308558 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:37.354021 containerd[1601]: time="2026-01-14T13:40:37.353883474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-86pnr,Uid:7a7718b9-7b54-444f-aace-23df1d67983a,Namespace:tigera-operator,Attempt:0,} 
returns sandbox id \"a6570891487fb820d0841ba93fcda3add2399b0155fe965b29ec581be7a1f449\"" Jan 14 13:40:37.356000 audit: BPF prog-id=141 op=LOAD Jan 14 13:40:37.356000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2886 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653536626332663637323230626166326161323865666436393765 Jan 14 13:40:37.356000 audit: BPF prog-id=142 op=LOAD Jan 14 13:40:37.356000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2886 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653536626332663637323230626166326161323865666436393765 Jan 14 13:40:37.356000 audit: BPF prog-id=142 op=UNLOAD Jan 14 13:40:37.356000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.356000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653536626332663637323230626166326161323865666436393765 Jan 14 13:40:37.356000 audit: BPF prog-id=141 op=UNLOAD Jan 14 13:40:37.356000 audit[2946]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2886 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653536626332663637323230626166326161323865666436393765 Jan 14 13:40:37.356000 audit: BPF prog-id=143 op=LOAD Jan 14 13:40:37.356000 audit[2946]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2886 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653536626332663637323230626166326161323865666436393765 Jan 14 13:40:37.361953 containerd[1601]: time="2026-01-14T13:40:37.361785737Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 13:40:37.445542 containerd[1601]: time="2026-01-14T13:40:37.445248886Z" level=info msg="StartContainer for \"e1e56bc2f67220baf2aa28efd697e1e3256fee855e8926cc39053e2bb5f26b49\" returns successfully" Jan 14 13:40:37.857000 
audit[3037]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:37.861162 kernel: kauditd_printk_skb: 91 callbacks suppressed Jan 14 13:40:37.861443 kernel: audit: type=1325 audit(1768398037.857:447): table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:37.857000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef4ddc170 a2=0 a3=7ffef4ddc15c items=0 ppid=2969 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.881056 kernel: audit: type=1300 audit(1768398037.857:447): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef4ddc170 a2=0 a3=7ffef4ddc15c items=0 ppid=2969 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:40:37.867000 audit[3036]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.892886 kernel: audit: type=1327 audit(1768398037.857:447): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:40:37.893011 kernel: audit: type=1325 audit(1768398037.867:448): table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.893060 kernel: audit: type=1300 audit(1768398037.867:448): arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffe93c0e6c0 a2=0 a3=7ffe93c0e6ac items=0 ppid=2969 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.867000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe93c0e6c0 a2=0 a3=7ffe93c0e6ac items=0 ppid=2969 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.902036 kernel: audit: type=1327 audit(1768398037.867:448): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:40:37.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:40:37.906728 kernel: audit: type=1325 audit(1768398037.870:449): table=nat:56 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.870000 audit[3042]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.870000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd836cbf0 a2=0 a3=7fffd836cbdc items=0 ppid=2969 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.920758 kernel: audit: type=1300 audit(1768398037.870:449): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd836cbf0 a2=0 a3=7fffd836cbdc items=0 ppid=2969 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.920827 kernel: audit: type=1327 audit(1768398037.870:449): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:40:37.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:40:37.925007 kernel: audit: type=1325 audit(1768398037.871:450): table=nat:57 family=10 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:37.871000 audit[3039]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:37.871000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0fb8ef80 a2=0 a3=7ffe0fb8ef6c items=0 ppid=2969 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:40:37.873000 audit[3043]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.873000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc68316ce0 a2=0 a3=7ffc68316ccc items=0 ppid=2969 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 13:40:37.882000 
audit[3044]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:37.882000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd10f85b50 a2=0 a3=7ffd10f85b3c items=0 ppid=2969 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 13:40:37.959000 audit[3045]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.959000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4dbb93c0 a2=0 a3=7ffe4dbb93ac items=0 ppid=2969 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 13:40:37.964000 audit[3047]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.964000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe5348b490 a2=0 a3=7ffe5348b47c items=0 ppid=2969 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.964000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 13:40:37.972000 audit[3050]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.972000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe364bec10 a2=0 a3=7ffe364bebfc items=0 ppid=2969 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.972000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 13:40:37.975000 audit[3051]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.975000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbc5d09e0 a2=0 a3=7ffcbc5d09cc items=0 ppid=2969 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.975000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 13:40:37.980000 audit[3053]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.980000 audit[3053]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd67fd1d10 a2=0 a3=7ffd67fd1cfc items=0 ppid=2969 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 13:40:37.982000 audit[3054]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.982000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeed7cb2e0 a2=0 a3=7ffeed7cb2cc items=0 ppid=2969 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 13:40:37.989000 audit[3056]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.989000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe6801930 a2=0 a3=7fffe680191c items=0 ppid=2969 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.989000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 13:40:37.997000 audit[3059]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:37.997000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd9e7d6ff0 a2=0 a3=7ffd9e7d6fdc items=0 ppid=2969 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:37.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 13:40:38.000000 audit[3060]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.000000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcecb2a390 a2=0 a3=7ffcecb2a37c items=0 ppid=2969 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 13:40:38.005000 audit[3062]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.005000 audit[3062]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffdecb2e750 a2=0 a3=7ffdecb2e73c items=0 ppid=2969 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 13:40:38.008000 audit[3063]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.008000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8080a430 a2=0 a3=7ffc8080a41c items=0 ppid=2969 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 13:40:38.014000 audit[3065]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.014000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd30387c10 a2=0 a3=7ffd30387bfc items=0 ppid=2969 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.014000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:40:38.022000 audit[3068]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.022000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0f4d21d0 a2=0 a3=7ffd0f4d21bc items=0 ppid=2969 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:40:38.029000 audit[3071]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.029000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd894ba380 a2=0 a3=7ffd894ba36c items=0 ppid=2969 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 13:40:38.032000 audit[3072]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.032000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd806da140 a2=0 a3=7ffd806da12c items=0 ppid=2969 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 13:40:38.041000 audit[3074]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.041000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff8ebcea90 a2=0 a3=7fff8ebcea7c items=0 ppid=2969 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:40:38.066000 audit[3077]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.066000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc74178900 a2=0 a3=7ffc741788ec items=0 ppid=2969 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.066000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:40:38.069000 audit[3078]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.069000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdea53b2a0 a2=0 a3=7ffdea53b28c items=0 ppid=2969 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 13:40:38.076000 audit[3080]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:40:38.076000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffccdaa0bb0 a2=0 a3=7ffccdaa0b9c items=0 ppid=2969 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 13:40:38.136000 audit[3086]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:38.136000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe72c830e0 a2=0 a3=7ffe72c830cc 
items=0 ppid=2969 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.136000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:38.169000 audit[3086]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:38.169000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe72c830e0 a2=0 a3=7ffe72c830cc items=0 ppid=2969 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:38.175000 audit[3091]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.175000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd3ebdd7b0 a2=0 a3=7ffd3ebdd79c items=0 ppid=2969 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 13:40:38.193000 audit[3093]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.193000 audit[3093]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffefc327340 a2=0 a3=7ffefc32732c items=0 ppid=2969 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 13:40:38.207000 audit[3096]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.207000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffb7c8a160 a2=0 a3=7fffb7c8a14c items=0 ppid=2969 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.207000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 13:40:38.210000 audit[3097]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.210000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd77aa1fc0 a2=0 a3=7ffd77aa1fac items=0 ppid=2969 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.210000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 13:40:38.216000 audit[3099]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.216000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9a754f80 a2=0 a3=7fff9a754f6c items=0 ppid=2969 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 13:40:38.219000 audit[3100]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.219000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff96da9aa0 a2=0 a3=7fff96da9a8c items=0 ppid=2969 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 13:40:38.254000 audit[3102]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.254000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdbdb89690 a2=0 a3=7ffdbdb8967c items=0 ppid=2969 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.254000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 13:40:38.265000 audit[3105]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.265000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff2527d860 a2=0 a3=7fff2527d84c items=0 ppid=2969 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 13:40:38.268000 audit[3106]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.268000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9404a7a0 a2=0 a3=7ffc9404a78c items=0 ppid=2969 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 13:40:38.274000 audit[3108]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.274000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1a2c8320 a2=0 a3=7ffd1a2c830c items=0 ppid=2969 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 13:40:38.277000 audit[3109]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.277000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3227d370 a2=0 a3=7ffd3227d35c items=0 ppid=2969 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 13:40:38.288000 audit[3111]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.288000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe89598360 a2=0 a3=7ffe8959834c items=0 ppid=2969 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.288000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:40:38.302000 audit[3114]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.302000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff63a044e0 a2=0 a3=7fff63a044cc items=0 ppid=2969 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 13:40:38.310000 audit[3117]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.310000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf0d7b850 a2=0 a3=7ffdf0d7b83c items=0 ppid=2969 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 13:40:38.313000 audit[3118]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.313000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe703693e0 a2=0 a3=7ffe703693cc items=0 ppid=2969 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 13:40:38.318505 kubelet[2830]: E0114 13:40:38.318449 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:38.319465 kubelet[2830]: E0114 13:40:38.319398 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:38.320000 audit[3120]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.320000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffecc1be660 a2=0 a3=7ffecc1be64c items=0 ppid=2969 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:40:38.332000 audit[3123]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 13:40:38.332000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5b37f880 a2=0 a3=7fff5b37f86c items=0 ppid=2969 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:40:38.340760 kubelet[2830]: I0114 13:40:38.340396 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6mhs6" podStartSLOduration=2.340372119 podStartE2EDuration="2.340372119s" podCreationTimestamp="2026-01-14 13:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:40:38.340032265 +0000 UTC m=+5.400602701" watchObservedRunningTime="2026-01-14 13:40:38.340372119 +0000 UTC m=+5.400942556" Jan 14 13:40:38.344000 audit[3124]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.344000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc44793400 a2=0 a3=7ffc447933ec items=0 ppid=2969 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 13:40:38.350000 audit[3126]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3126 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.350000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff668920f0 a2=0 a3=7fff668920dc items=0 ppid=2969 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 13:40:38.353000 audit[3127]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.353000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9e58f8b0 a2=0 a3=7ffe9e58f89c items=0 ppid=2969 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 13:40:38.358000 audit[3129]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.358000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc98c35c30 a2=0 a3=7ffc98c35c1c items=0 ppid=2969 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.358000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:40:38.366000 audit[3132]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:40:38.366000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed1bd6f80 a2=0 a3=7ffed1bd6f6c items=0 ppid=2969 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:40:38.374000 audit[3134]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 13:40:38.374000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffce8816490 a2=0 a3=7ffce881647c items=0 ppid=2969 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.374000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:38.375000 audit[3134]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 13:40:38.375000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffce8816490 a2=0 a3=7ffce881647c items=0 ppid=2969 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:38.375000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:38.504520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379476026.mount: Deactivated successfully. Jan 14 13:40:39.333875 kubelet[2830]: E0114 13:40:39.332846 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:41.459020 containerd[1601]: time="2026-01-14T13:40:41.458866161Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:41.460071 containerd[1601]: time="2026-01-14T13:40:41.460025404Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 14 13:40:41.461402 containerd[1601]: time="2026-01-14T13:40:41.461338355Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:41.464922 containerd[1601]: time="2026-01-14T13:40:41.464843943Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:41.465452 containerd[1601]: time="2026-01-14T13:40:41.465346105Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.103378219s" Jan 14 13:40:41.465452 
containerd[1601]: time="2026-01-14T13:40:41.465436604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 13:40:41.468330 containerd[1601]: time="2026-01-14T13:40:41.468267463Z" level=info msg="CreateContainer within sandbox \"a6570891487fb820d0841ba93fcda3add2399b0155fe965b29ec581be7a1f449\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 13:40:41.479414 containerd[1601]: time="2026-01-14T13:40:41.479338216Z" level=info msg="Container e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:40:41.509423 containerd[1601]: time="2026-01-14T13:40:41.508095815Z" level=info msg="CreateContainer within sandbox \"a6570891487fb820d0841ba93fcda3add2399b0155fe965b29ec581be7a1f449\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea\"" Jan 14 13:40:41.517627 containerd[1601]: time="2026-01-14T13:40:41.516768027Z" level=info msg="StartContainer for \"e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea\"" Jan 14 13:40:41.521505 containerd[1601]: time="2026-01-14T13:40:41.521403859Z" level=info msg="connecting to shim e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea" address="unix:///run/containerd/s/d083e6de6213024ae1bf2483c1b4a6feb8f019498ffe84a137ce9aef9ae9d185" protocol=ttrpc version=3 Jan 14 13:40:41.621102 systemd[1]: Started cri-containerd-e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea.scope - libcontainer container e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea. 
Jan 14 13:40:41.672000 audit: BPF prog-id=144 op=LOAD Jan 14 13:40:41.673000 audit: BPF prog-id=145 op=LOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.673000 audit: BPF prog-id=145 op=UNLOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.673000 audit: BPF prog-id=146 op=LOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:41.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.673000 audit: BPF prog-id=147 op=LOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.673000 audit: BPF prog-id=147 op=UNLOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.673000 audit: BPF prog-id=146 op=UNLOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:40:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.673000 audit: BPF prog-id=148 op=LOAD Jan 14 13:40:41.673000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2934 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:41.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539643637616662363461343934636239343331313866306336326331 Jan 14 13:40:41.703016 containerd[1601]: time="2026-01-14T13:40:41.702892087Z" level=info msg="StartContainer for \"e9d67afb64a494cb943118f0c62c15897dcbec92f660e5359babbe7ae726eaea\" returns successfully" Jan 14 13:40:42.409810 kubelet[2830]: I0114 13:40:42.409533 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-86pnr" podStartSLOduration=2.302160302 podStartE2EDuration="6.409350866s" podCreationTimestamp="2026-01-14 13:40:36 +0000 UTC" firstStartedPulling="2026-01-14 13:40:37.359294508 +0000 UTC m=+4.419864944" lastFinishedPulling="2026-01-14 13:40:41.466485072 +0000 UTC m=+8.527055508" observedRunningTime="2026-01-14 13:40:42.409062741 +0000 UTC m=+9.469633177" watchObservedRunningTime="2026-01-14 13:40:42.409350866 +0000 UTC m=+9.469921313" Jan 14 13:40:43.509871 kubelet[2830]: E0114 13:40:43.509774 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:44.394406 kubelet[2830]: E0114 13:40:44.394316 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:48.757000 audit[1841]: USER_END pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:40:48.758265 sudo[1841]: pam_unix(sudo:session): session closed for user root Jan 14 13:40:48.768409 kernel: kauditd_printk_skb: 165 callbacks suppressed Jan 14 13:40:48.768501 kernel: audit: type=1106 audit(1768398048.757:506): pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:40:48.768534 sshd[1840]: Connection closed by 10.0.0.1 port 32878 Jan 14 13:40:48.770899 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Jan 14 13:40:48.757000 audit[1841]: CRED_DISP pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:40:48.780316 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:32878.service: Deactivated successfully. Jan 14 13:40:48.794261 kernel: audit: type=1104 audit(1768398048.757:507): pid=1841 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 13:40:48.794353 kernel: audit: type=1106 audit(1768398048.770:508): pid=1836 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:40:48.770000 audit[1836]: USER_END pid=1836 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:40:48.786260 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:40:48.788624 systemd[1]: session-10.scope: Consumed 9.813s CPU time, 218.5M memory peak. Jan 14 13:40:48.793238 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:40:48.770000 audit[1836]: CRED_DISP pid=1836 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:40:48.796236 systemd-logind[1577]: Removed session 10. Jan 14 13:40:48.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.106:22-10.0.0.1:32878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:40:48.813936 kernel: audit: type=1104 audit(1768398048.770:509): pid=1836 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:40:48.814074 kernel: audit: type=1131 audit(1768398048.780:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.106:22-10.0.0.1:32878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:40:49.091000 audit[3232]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:49.099778 kernel: audit: type=1325 audit(1768398049.091:511): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:49.091000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe488b7a60 a2=0 a3=7ffe488b7a4c items=0 ppid=2969 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:49.113785 kernel: audit: type=1300 audit(1768398049.091:511): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe488b7a60 a2=0 a3=7ffe488b7a4c items=0 ppid=2969 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:49.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:49.113000 audit[3232]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3232 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:49.127024 kernel: audit: type=1327 audit(1768398049.091:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:49.127088 kernel: audit: type=1325 audit(1768398049.113:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:49.127117 kernel: audit: type=1300 audit(1768398049.113:512): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe488b7a60 a2=0 a3=0 items=0 ppid=2969 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:49.113000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe488b7a60 a2=0 a3=0 items=0 ppid=2969 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:49.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:49.156000 audit[3234]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:49.156000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd501e8b30 a2=0 a3=7ffd501e8b1c items=0 ppid=2969 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:49.156000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:49.162000 audit[3234]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:49.162000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd501e8b30 a2=0 a3=0 items=0 ppid=2969 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:49.162000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:51.177000 audit[3236]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:51.177000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff90933210 a2=0 a3=7fff909331fc items=0 ppid=2969 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:51.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:51.185000 audit[3236]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:51.185000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff90933210 a2=0 a3=0 items=0 ppid=2969 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:40:51.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:51.206000 audit[3238]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:51.206000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcaeec1190 a2=0 a3=7ffcaeec117c items=0 ppid=2969 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:51.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:51.212000 audit[3238]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:51.212000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcaeec1190 a2=0 a3=0 items=0 ppid=2969 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:51.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:52.225000 audit[3240]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:52.225000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbe983e00 a2=0 a3=7ffdbe983dec items=0 ppid=2969 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:52.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:52.242000 audit[3240]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:52.242000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbe983e00 a2=0 a3=0 items=0 ppid=2969 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:52.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:53.036874 systemd[1]: Created slice kubepods-besteffort-podb51f7f11_dc93_4f27_9025_c29b15252b60.slice - libcontainer container kubepods-besteffort-podb51f7f11_dc93_4f27_9025_c29b15252b60.slice. 
Jan 14 13:40:53.087580 kubelet[2830]: I0114 13:40:53.087474 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b51f7f11-dc93-4f27-9025-c29b15252b60-tigera-ca-bundle\") pod \"calico-typha-58998775d-7mnwx\" (UID: \"b51f7f11-dc93-4f27-9025-c29b15252b60\") " pod="calico-system/calico-typha-58998775d-7mnwx" Jan 14 13:40:53.087580 kubelet[2830]: I0114 13:40:53.087564 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b51f7f11-dc93-4f27-9025-c29b15252b60-typha-certs\") pod \"calico-typha-58998775d-7mnwx\" (UID: \"b51f7f11-dc93-4f27-9025-c29b15252b60\") " pod="calico-system/calico-typha-58998775d-7mnwx" Jan 14 13:40:53.088190 kubelet[2830]: I0114 13:40:53.087596 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdkw\" (UniqueName: \"kubernetes.io/projected/b51f7f11-dc93-4f27-9025-c29b15252b60-kube-api-access-mpdkw\") pod \"calico-typha-58998775d-7mnwx\" (UID: \"b51f7f11-dc93-4f27-9025-c29b15252b60\") " pod="calico-system/calico-typha-58998775d-7mnwx" Jan 14 13:40:53.233417 systemd[1]: Created slice kubepods-besteffort-podd539aa2e_c430_4d64_a59e_4b30d8ffe1c2.slice - libcontainer container kubepods-besteffort-podd539aa2e_c430_4d64_a59e_4b30d8ffe1c2.slice. 
Jan 14 13:40:53.276000 audit[3244]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:53.276000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd8e5dcfe0 a2=0 a3=7ffd8e5dcfcc items=0 ppid=2969 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:53.288792 kubelet[2830]: I0114 13:40:53.288558 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-tigera-ca-bundle\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288792 kubelet[2830]: I0114 13:40:53.288622 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-policysync\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288792 kubelet[2830]: I0114 13:40:53.288642 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-node-certs\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288792 kubelet[2830]: I0114 13:40:53.288660 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-cni-bin-dir\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288792 kubelet[2830]: I0114 13:40:53.288755 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-var-run-calico\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288965 kubelet[2830]: I0114 13:40:53.288774 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-cni-log-dir\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288965 kubelet[2830]: I0114 13:40:53.288794 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-cni-net-dir\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288965 kubelet[2830]: I0114 13:40:53.288811 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-xtables-lock\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288965 kubelet[2830]: I0114 13:40:53.288830 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl859\" (UniqueName: 
\"kubernetes.io/projected/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-kube-api-access-sl859\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.288965 kubelet[2830]: I0114 13:40:53.288853 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-flexvol-driver-host\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.289067 kubelet[2830]: I0114 13:40:53.288871 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-lib-modules\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.289067 kubelet[2830]: I0114 13:40:53.288884 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d539aa2e-c430-4d64-a59e-4b30d8ffe1c2-var-lib-calico\") pod \"calico-node-nnjw8\" (UID: \"d539aa2e-c430-4d64-a59e-4b30d8ffe1c2\") " pod="calico-system/calico-node-nnjw8" Jan 14 13:40:53.289000 audit[3244]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:40:53.289000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8e5dcfe0 a2=0 a3=0 items=0 ppid=2969 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.289000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:40:53.341345 kubelet[2830]: E0114 13:40:53.341268 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:53.342100 containerd[1601]: time="2026-01-14T13:40:53.342001746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58998775d-7mnwx,Uid:b51f7f11-dc93-4f27-9025-c29b15252b60,Namespace:calico-system,Attempt:0,}" Jan 14 13:40:53.367867 containerd[1601]: time="2026-01-14T13:40:53.367815885Z" level=info msg="connecting to shim 6a63888e37f1102e5b99d4f5b6cbf4a283b71cd18a504b100ffc78e1953d04e0" address="unix:///run/containerd/s/92318cb2d610891bebadbae64f45d4d15898ba3b12a81a94d50449bfb4daf7af" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:53.396115 kubelet[2830]: E0114 13:40:53.396067 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.396115 kubelet[2830]: W0114 13:40:53.396097 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.397775 kubelet[2830]: E0114 13:40:53.397094 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.397775 kubelet[2830]: E0114 13:40:53.397354 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.397775 kubelet[2830]: W0114 13:40:53.397581 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.397775 kubelet[2830]: E0114 13:40:53.397595 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.398385 kubelet[2830]: E0114 13:40:53.398276 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.398385 kubelet[2830]: W0114 13:40:53.398372 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.398499 kubelet[2830]: E0114 13:40:53.398465 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.399050 kubelet[2830]: E0114 13:40:53.398948 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.399050 kubelet[2830]: W0114 13:40:53.399041 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.399260 kubelet[2830]: E0114 13:40:53.399161 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.400928 kubelet[2830]: E0114 13:40:53.400185 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.400928 kubelet[2830]: W0114 13:40:53.400215 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.400928 kubelet[2830]: E0114 13:40:53.400591 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.400928 kubelet[2830]: W0114 13:40:53.400608 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.401034 kubelet[2830]: E0114 13:40:53.400960 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.401034 kubelet[2830]: W0114 13:40:53.400972 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.402797 kubelet[2830]: E0114 13:40:53.401228 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.402797 kubelet[2830]: W0114 13:40:53.401243 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.402797 kubelet[2830]: E0114 13:40:53.401254 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.402797 kubelet[2830]: E0114 13:40:53.401420 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.402797 kubelet[2830]: E0114 13:40:53.401442 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.402797 kubelet[2830]: E0114 13:40:53.401626 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.402797 kubelet[2830]: W0114 13:40:53.401636 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.402797 kubelet[2830]: E0114 13:40:53.401657 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.403660 kubelet[2830]: E0114 13:40:53.401943 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.403660 kubelet[2830]: E0114 13:40:53.402006 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.403660 kubelet[2830]: W0114 13:40:53.403492 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.403660 kubelet[2830]: E0114 13:40:53.403506 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.406119 kubelet[2830]: E0114 13:40:53.403913 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.406119 kubelet[2830]: W0114 13:40:53.403923 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.406119 kubelet[2830]: E0114 13:40:53.403934 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.406119 kubelet[2830]: E0114 13:40:53.404554 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.406119 kubelet[2830]: W0114 13:40:53.404565 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.406119 kubelet[2830]: E0114 13:40:53.404888 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.406119 kubelet[2830]: E0114 13:40:53.405737 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.406119 kubelet[2830]: W0114 13:40:53.405748 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.406119 kubelet[2830]: E0114 13:40:53.405761 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.409487 kubelet[2830]: E0114 13:40:53.409353 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.409487 kubelet[2830]: W0114 13:40:53.409375 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.409487 kubelet[2830]: E0114 13:40:53.409394 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.411146 kubelet[2830]: E0114 13:40:53.411027 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.411146 kubelet[2830]: W0114 13:40:53.411043 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.411146 kubelet[2830]: E0114 13:40:53.411056 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.422813 kubelet[2830]: E0114 13:40:53.420575 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:40:53.429843 systemd[1]: Started cri-containerd-6a63888e37f1102e5b99d4f5b6cbf4a283b71cd18a504b100ffc78e1953d04e0.scope - libcontainer container 6a63888e37f1102e5b99d4f5b6cbf4a283b71cd18a504b100ffc78e1953d04e0. 
Jan 14 13:40:53.464000 audit: BPF prog-id=149 op=LOAD Jan 14 13:40:53.466000 audit: BPF prog-id=150 op=LOAD Jan 14 13:40:53.466000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.467000 audit: BPF prog-id=150 op=UNLOAD Jan 14 13:40:53.467000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.467000 audit: BPF prog-id=151 op=LOAD Jan 14 13:40:53.467000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.467000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.468000 audit: BPF prog-id=152 op=LOAD Jan 14 13:40:53.468000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.468000 audit: BPF prog-id=152 op=UNLOAD Jan 14 13:40:53.468000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.468000 audit: BPF prog-id=151 op=UNLOAD Jan 14 13:40:53.468000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:40:53.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.468000 audit: BPF prog-id=153 op=LOAD Jan 14 13:40:53.468000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3253 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363338383865333766313130326535623939643466356236636266 Jan 14 13:40:53.483756 kubelet[2830]: E0114 13:40:53.483597 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.483756 kubelet[2830]: W0114 13:40:53.483635 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.483756 kubelet[2830]: E0114 13:40:53.483658 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.484177 kubelet[2830]: E0114 13:40:53.484123 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.484177 kubelet[2830]: W0114 13:40:53.484153 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.484177 kubelet[2830]: E0114 13:40:53.484165 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.484501 kubelet[2830]: E0114 13:40:53.484478 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.484501 kubelet[2830]: W0114 13:40:53.484496 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.484589 kubelet[2830]: E0114 13:40:53.484511 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.486811 kubelet[2830]: E0114 13:40:53.486628 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.486811 kubelet[2830]: W0114 13:40:53.486659 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.486811 kubelet[2830]: E0114 13:40:53.486740 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.487579 kubelet[2830]: E0114 13:40:53.487540 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.488179 kubelet[2830]: W0114 13:40:53.487763 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.488364 kubelet[2830]: E0114 13:40:53.488347 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.489015 kubelet[2830]: E0114 13:40:53.489000 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.489218 kubelet[2830]: W0114 13:40:53.489144 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.489430 kubelet[2830]: E0114 13:40:53.489335 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.490355 kubelet[2830]: E0114 13:40:53.490321 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.490355 kubelet[2830]: W0114 13:40:53.490346 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.490430 kubelet[2830]: E0114 13:40:53.490359 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.491005 kubelet[2830]: E0114 13:40:53.490979 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.491005 kubelet[2830]: W0114 13:40:53.491001 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.491077 kubelet[2830]: E0114 13:40:53.491013 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.491496 kubelet[2830]: E0114 13:40:53.491473 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.491496 kubelet[2830]: W0114 13:40:53.491494 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.491549 kubelet[2830]: E0114 13:40:53.491503 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.492743 kubelet[2830]: E0114 13:40:53.491913 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.492743 kubelet[2830]: W0114 13:40:53.491927 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.492743 kubelet[2830]: E0114 13:40:53.491937 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.492743 kubelet[2830]: E0114 13:40:53.492525 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.492743 kubelet[2830]: W0114 13:40:53.492539 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.492743 kubelet[2830]: E0114 13:40:53.492623 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.493310 kubelet[2830]: E0114 13:40:53.493234 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.493310 kubelet[2830]: W0114 13:40:53.493279 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.493310 kubelet[2830]: E0114 13:40:53.493296 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.493999 kubelet[2830]: E0114 13:40:53.493880 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.493999 kubelet[2830]: W0114 13:40:53.493911 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.493999 kubelet[2830]: E0114 13:40:53.493922 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.494306 kubelet[2830]: E0114 13:40:53.494206 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.494306 kubelet[2830]: W0114 13:40:53.494287 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.494306 kubelet[2830]: E0114 13:40:53.494298 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.494739 kubelet[2830]: E0114 13:40:53.494530 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.494739 kubelet[2830]: W0114 13:40:53.494547 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.494739 kubelet[2830]: E0114 13:40:53.494562 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.495572 kubelet[2830]: E0114 13:40:53.495528 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.495572 kubelet[2830]: W0114 13:40:53.495563 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.495572 kubelet[2830]: E0114 13:40:53.495574 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.495962 kubelet[2830]: E0114 13:40:53.495911 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.495962 kubelet[2830]: W0114 13:40:53.495944 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.495962 kubelet[2830]: E0114 13:40:53.495961 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.496251 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.497805 kubelet[2830]: W0114 13:40:53.496268 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.496283 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.496490 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.497805 kubelet[2830]: W0114 13:40:53.496499 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.496507 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.496845 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.497805 kubelet[2830]: W0114 13:40:53.496854 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.496863 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.497805 kubelet[2830]: E0114 13:40:53.497275 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.499087 kubelet[2830]: W0114 13:40:53.497285 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.499087 kubelet[2830]: E0114 13:40:53.497294 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.499087 kubelet[2830]: I0114 13:40:53.497318 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70f39dd7-0818-4b8e-a6d8-f99942268a1b-kubelet-dir\") pod \"csi-node-driver-5l6pz\" (UID: \"70f39dd7-0818-4b8e-a6d8-f99942268a1b\") " pod="calico-system/csi-node-driver-5l6pz" Jan 14 13:40:53.499087 kubelet[2830]: E0114 13:40:53.497771 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.499087 kubelet[2830]: W0114 13:40:53.497784 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.499087 kubelet[2830]: E0114 13:40:53.497827 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.499087 kubelet[2830]: I0114 13:40:53.497844 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/70f39dd7-0818-4b8e-a6d8-f99942268a1b-varrun\") pod \"csi-node-driver-5l6pz\" (UID: \"70f39dd7-0818-4b8e-a6d8-f99942268a1b\") " pod="calico-system/csi-node-driver-5l6pz" Jan 14 13:40:53.499087 kubelet[2830]: E0114 13:40:53.498348 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.499365 kubelet[2830]: W0114 13:40:53.498359 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.499365 kubelet[2830]: E0114 13:40:53.498437 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.499365 kubelet[2830]: E0114 13:40:53.499245 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.499365 kubelet[2830]: W0114 13:40:53.499256 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.500524 kubelet[2830]: E0114 13:40:53.499606 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.500524 kubelet[2830]: E0114 13:40:53.500355 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.500524 kubelet[2830]: W0114 13:40:53.500367 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.500524 kubelet[2830]: E0114 13:40:53.500377 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.501616 kubelet[2830]: E0114 13:40:53.501532 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.501616 kubelet[2830]: W0114 13:40:53.501552 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.502024 kubelet[2830]: E0114 13:40:53.501836 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.502439 kubelet[2830]: E0114 13:40:53.502425 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.502526 kubelet[2830]: W0114 13:40:53.502507 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.502611 kubelet[2830]: E0114 13:40:53.502592 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.503498 kubelet[2830]: I0114 13:40:53.503315 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/70f39dd7-0818-4b8e-a6d8-f99942268a1b-socket-dir\") pod \"csi-node-driver-5l6pz\" (UID: \"70f39dd7-0818-4b8e-a6d8-f99942268a1b\") " pod="calico-system/csi-node-driver-5l6pz" Jan 14 13:40:53.504054 kubelet[2830]: E0114 13:40:53.503970 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.504216 kubelet[2830]: W0114 13:40:53.504134 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.504216 kubelet[2830]: E0114 13:40:53.504153 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.504829 kubelet[2830]: E0114 13:40:53.504616 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.504898 kubelet[2830]: W0114 13:40:53.504883 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.505098 kubelet[2830]: E0114 13:40:53.505075 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.505444 kubelet[2830]: E0114 13:40:53.505430 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.505505 kubelet[2830]: W0114 13:40:53.505493 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.505567 kubelet[2830]: E0114 13:40:53.505554 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.505646 kubelet[2830]: I0114 13:40:53.505624 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh72h\" (UniqueName: \"kubernetes.io/projected/70f39dd7-0818-4b8e-a6d8-f99942268a1b-kube-api-access-jh72h\") pod \"csi-node-driver-5l6pz\" (UID: \"70f39dd7-0818-4b8e-a6d8-f99942268a1b\") " pod="calico-system/csi-node-driver-5l6pz" Jan 14 13:40:53.506612 kubelet[2830]: E0114 13:40:53.506551 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.506612 kubelet[2830]: W0114 13:40:53.506583 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.507483 kubelet[2830]: E0114 13:40:53.507335 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.507483 kubelet[2830]: W0114 13:40:53.507371 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.507483 kubelet[2830]: E0114 13:40:53.507405 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.507483 kubelet[2830]: E0114 13:40:53.507431 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.507483 kubelet[2830]: I0114 13:40:53.507455 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/70f39dd7-0818-4b8e-a6d8-f99942268a1b-registration-dir\") pod \"csi-node-driver-5l6pz\" (UID: \"70f39dd7-0818-4b8e-a6d8-f99942268a1b\") " pod="calico-system/csi-node-driver-5l6pz" Jan 14 13:40:53.508350 kubelet[2830]: E0114 13:40:53.508329 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.508561 kubelet[2830]: W0114 13:40:53.508435 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.508561 kubelet[2830]: E0114 13:40:53.508454 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.509152 kubelet[2830]: E0114 13:40:53.509131 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.509420 kubelet[2830]: W0114 13:40:53.509221 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.509420 kubelet[2830]: E0114 13:40:53.509244 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.510204 kubelet[2830]: E0114 13:40:53.509756 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.510278 kubelet[2830]: W0114 13:40:53.510261 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.510325 kubelet[2830]: E0114 13:40:53.510314 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.538426 kubelet[2830]: E0114 13:40:53.538377 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:53.538975 containerd[1601]: time="2026-01-14T13:40:53.538884338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnjw8,Uid:d539aa2e-c430-4d64-a59e-4b30d8ffe1c2,Namespace:calico-system,Attempt:0,}" Jan 14 13:40:53.551322 containerd[1601]: time="2026-01-14T13:40:53.551137263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58998775d-7mnwx,Uid:b51f7f11-dc93-4f27-9025-c29b15252b60,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a63888e37f1102e5b99d4f5b6cbf4a283b71cd18a504b100ffc78e1953d04e0\"" Jan 14 13:40:53.552420 kubelet[2830]: E0114 13:40:53.552362 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:53.553956 containerd[1601]: time="2026-01-14T13:40:53.553846997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 13:40:53.580363 containerd[1601]: time="2026-01-14T13:40:53.580291983Z" level=info 
msg="connecting to shim a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a" address="unix:///run/containerd/s/b62bf999b925cff1493dc029c9a330276dfc224b26d90aae956907991cb7ef45" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:40:53.611074 kubelet[2830]: E0114 13:40:53.611044 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.611074 kubelet[2830]: W0114 13:40:53.611061 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.611224 kubelet[2830]: E0114 13:40:53.611115 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.611776 kubelet[2830]: E0114 13:40:53.611639 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.611776 kubelet[2830]: W0114 13:40:53.611707 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.611776 kubelet[2830]: E0114 13:40:53.611777 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.612513 kubelet[2830]: E0114 13:40:53.612469 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.612553 kubelet[2830]: W0114 13:40:53.612536 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.612576 kubelet[2830]: E0114 13:40:53.612558 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.613618 kubelet[2830]: E0114 13:40:53.613571 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.614041 kubelet[2830]: W0114 13:40:53.614013 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.614126 kubelet[2830]: E0114 13:40:53.614100 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.614601 kubelet[2830]: E0114 13:40:53.614574 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.614841 kubelet[2830]: W0114 13:40:53.614759 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.615218 kubelet[2830]: E0114 13:40:53.615198 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.627144 kubelet[2830]: E0114 13:40:53.627009 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.627144 kubelet[2830]: W0114 13:40:53.627103 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.627641 kubelet[2830]: E0114 13:40:53.627598 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.627641 kubelet[2830]: W0114 13:40:53.627625 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.628046 kubelet[2830]: E0114 13:40:53.627983 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.628107 kubelet[2830]: W0114 13:40:53.628078 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.628134 kubelet[2830]: E0114 13:40:53.628106 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.629110 kubelet[2830]: E0114 13:40:53.629063 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.630844 kubelet[2830]: E0114 13:40:53.630764 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.630933 kubelet[2830]: E0114 13:40:53.630885 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.630933 kubelet[2830]: W0114 13:40:53.630902 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.630933 kubelet[2830]: E0114 13:40:53.630921 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.631557 kubelet[2830]: E0114 13:40:53.631502 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.631926 kubelet[2830]: W0114 13:40:53.631898 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.632010 kubelet[2830]: E0114 13:40:53.631934 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.632999 kubelet[2830]: E0114 13:40:53.632653 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.633107 kubelet[2830]: W0114 13:40:53.633080 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.633827 kubelet[2830]: E0114 13:40:53.633657 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.633827 kubelet[2830]: W0114 13:40:53.633810 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.633886 systemd[1]: Started cri-containerd-a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a.scope - libcontainer container a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a. 
Jan 14 13:40:53.634476 kubelet[2830]: E0114 13:40:53.634453 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.634476 kubelet[2830]: W0114 13:40:53.634466 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.635223 kubelet[2830]: E0114 13:40:53.635192 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.635266 kubelet[2830]: E0114 13:40:53.635232 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.635266 kubelet[2830]: E0114 13:40:53.635248 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.636302 kubelet[2830]: E0114 13:40:53.636238 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.636302 kubelet[2830]: W0114 13:40:53.636254 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.636302 kubelet[2830]: E0114 13:40:53.636266 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.638063 kubelet[2830]: E0114 13:40:53.638008 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.639486 kubelet[2830]: W0114 13:40:53.639440 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.639816 kubelet[2830]: E0114 13:40:53.639779 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.641604 kubelet[2830]: E0114 13:40:53.641536 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.641777 kubelet[2830]: W0114 13:40:53.641748 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.642157 kubelet[2830]: E0114 13:40:53.642023 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.642641 kubelet[2830]: E0114 13:40:53.642609 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.642836 kubelet[2830]: W0114 13:40:53.642805 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.643065 kubelet[2830]: E0114 13:40:53.643045 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.643567 kubelet[2830]: E0114 13:40:53.643463 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.643567 kubelet[2830]: W0114 13:40:53.643548 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.643787 kubelet[2830]: E0114 13:40:53.643755 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.644091 kubelet[2830]: E0114 13:40:53.644046 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.644160 kubelet[2830]: W0114 13:40:53.644136 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.644234 kubelet[2830]: E0114 13:40:53.644207 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.644574 kubelet[2830]: E0114 13:40:53.644534 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.644574 kubelet[2830]: W0114 13:40:53.644558 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.644834 kubelet[2830]: E0114 13:40:53.644744 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.645291 kubelet[2830]: E0114 13:40:53.645174 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.645291 kubelet[2830]: W0114 13:40:53.645200 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.645413 kubelet[2830]: E0114 13:40:53.645310 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.645537 kubelet[2830]: E0114 13:40:53.645509 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.645816 kubelet[2830]: W0114 13:40:53.645537 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.645816 kubelet[2830]: E0114 13:40:53.645588 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.645936 kubelet[2830]: E0114 13:40:53.645848 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.645936 kubelet[2830]: W0114 13:40:53.645857 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.646055 kubelet[2830]: E0114 13:40:53.645971 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.646405 kubelet[2830]: E0114 13:40:53.646291 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.646405 kubelet[2830]: W0114 13:40:53.646402 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.646754 kubelet[2830]: E0114 13:40:53.646480 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.646827 kubelet[2830]: E0114 13:40:53.646819 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.647214 kubelet[2830]: W0114 13:40:53.646830 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.647214 kubelet[2830]: E0114 13:40:53.646905 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:53.647214 kubelet[2830]: E0114 13:40:53.647150 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:53.647214 kubelet[2830]: W0114 13:40:53.647160 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:53.647214 kubelet[2830]: E0114 13:40:53.647169 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:53.651000 audit: BPF prog-id=154 op=LOAD Jan 14 13:40:53.652000 audit: BPF prog-id=155 op=LOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.652000 audit: BPF prog-id=155 op=UNLOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.652000 audit: BPF prog-id=156 op=LOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.652000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.652000 audit: BPF prog-id=157 op=LOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.652000 audit: BPF prog-id=157 op=UNLOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.652000 audit: BPF prog-id=156 op=UNLOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:40:53.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.652000 audit: BPF prog-id=158 op=LOAD Jan 14 13:40:53.652000 audit[3375]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3365 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:53.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135636332306363333662343764613437663031313266623930356164 Jan 14 13:40:53.690637 containerd[1601]: time="2026-01-14T13:40:53.690479497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnjw8,Uid:d539aa2e-c430-4d64-a59e-4b30d8ffe1c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\"" Jan 14 13:40:53.692271 kubelet[2830]: E0114 13:40:53.692249 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:56.144868 kubelet[2830]: E0114 13:40:56.122283 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:40:57.738294 containerd[1601]: 
time="2026-01-14T13:40:57.737318508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:57.760796 containerd[1601]: time="2026-01-14T13:40:57.751653520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 14 13:40:57.761549 containerd[1601]: time="2026-01-14T13:40:57.761491170Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:57.765755 containerd[1601]: time="2026-01-14T13:40:57.764287089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:40:57.765910 containerd[1601]: time="2026-01-14T13:40:57.765876907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.21186382s" Jan 14 13:40:57.766093 containerd[1601]: time="2026-01-14T13:40:57.766062875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 13:40:57.771869 containerd[1601]: time="2026-01-14T13:40:57.771800151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 13:40:57.789476 containerd[1601]: time="2026-01-14T13:40:57.789428659Z" level=info msg="CreateContainer within sandbox \"6a63888e37f1102e5b99d4f5b6cbf4a283b71cd18a504b100ffc78e1953d04e0\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 13:40:57.812769 containerd[1601]: time="2026-01-14T13:40:57.810580190Z" level=info msg="Container 56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:40:57.825317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104667130.mount: Deactivated successfully. Jan 14 13:40:57.882258 containerd[1601]: time="2026-01-14T13:40:57.882151399Z" level=info msg="CreateContainer within sandbox \"6a63888e37f1102e5b99d4f5b6cbf4a283b71cd18a504b100ffc78e1953d04e0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a\"" Jan 14 13:40:57.883328 containerd[1601]: time="2026-01-14T13:40:57.883302896Z" level=info msg="StartContainer for \"56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a\"" Jan 14 13:40:57.890915 containerd[1601]: time="2026-01-14T13:40:57.890840910Z" level=info msg="connecting to shim 56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a" address="unix:///run/containerd/s/92318cb2d610891bebadbae64f45d4d15898ba3b12a81a94d50449bfb4daf7af" protocol=ttrpc version=3 Jan 14 13:40:58.074251 systemd[1]: Started cri-containerd-56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a.scope - libcontainer container 56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a. 
Jan 14 13:40:58.184000 audit: BPF prog-id=159 op=LOAD Jan 14 13:40:58.190010 kernel: kauditd_printk_skb: 75 callbacks suppressed Jan 14 13:40:58.198865 kernel: audit: type=1334 audit(1768398058.184:539): prog-id=159 op=LOAD Jan 14 13:40:58.198960 kernel: audit: type=1334 audit(1768398058.184:540): prog-id=160 op=LOAD Jan 14 13:40:58.198988 kernel: audit: type=1300 audit(1768398058.184:540): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit: BPF prog-id=160 op=LOAD Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: BPF prog-id=160 op=UNLOAD Jan 14 13:40:58.225791 kernel: audit: type=1327 audit(1768398058.184:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.225960 kernel: audit: type=1334 audit(1768398058.184:541): prog-id=160 op=UNLOAD Jan 14 13:40:58.226079 kernel: audit: type=1300 audit(1768398058.184:541): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.273812 kernel: audit: type=1327 audit(1768398058.184:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: BPF prog-id=161 op=LOAD Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.302895 kernel: audit: type=1334 audit(1768398058.184:542): prog-id=161 op=LOAD Jan 14 13:40:58.304069 kernel: audit: type=1300 audit(1768398058.184:542): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:40:58.304108 kernel: audit: type=1327 audit(1768398058.184:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.308125 kubelet[2830]: E0114 13:40:58.308006 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:40:58.184000 audit: BPF prog-id=162 op=LOAD Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: BPF prog-id=162 op=UNLOAD Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: BPF prog-id=161 op=UNLOAD Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.184000 audit: BPF prog-id=163 op=LOAD Jan 14 13:40:58.184000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3253 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:40:58.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536633430663437393061346261393433646532383839356239323665 Jan 14 13:40:58.396630 containerd[1601]: time="2026-01-14T13:40:58.396490698Z" level=info msg="StartContainer for \"56c40f4790a4ba943de28895b926e177899841d7a450fa43eee56ec48a25e39a\" returns successfully" Jan 14 13:40:58.964612 kubelet[2830]: E0114 
13:40:58.964393 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:40:59.186465 kubelet[2830]: E0114 13:40:59.185878 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.186465 kubelet[2830]: W0114 13:40:59.186000 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.186465 kubelet[2830]: E0114 13:40:59.186125 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:59.189712 kubelet[2830]: E0114 13:40:59.188098 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.189712 kubelet[2830]: W0114 13:40:59.188115 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.189712 kubelet[2830]: E0114 13:40:59.188133 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:59.189712 kubelet[2830]: E0114 13:40:59.188490 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.189712 kubelet[2830]: W0114 13:40:59.188510 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.189712 kubelet[2830]: E0114 13:40:59.188568 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:59.190864 kubelet[2830]: E0114 13:40:59.190812 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.190864 kubelet[2830]: W0114 13:40:59.190851 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.190980 kubelet[2830]: E0114 13:40:59.190873 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:59.194813 kubelet[2830]: E0114 13:40:59.194766 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.194894 kubelet[2830]: W0114 13:40:59.194840 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.194894 kubelet[2830]: E0114 13:40:59.194854 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:59.195322 kubelet[2830]: E0114 13:40:59.195246 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.195322 kubelet[2830]: W0114 13:40:59.195261 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.195322 kubelet[2830]: E0114 13:40:59.195273 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:40:59.291069 kubelet[2830]: E0114 13:40:59.281115 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.291069 kubelet[2830]: W0114 13:40:59.281249 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.291069 kubelet[2830]: E0114 13:40:59.281363 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:40:59.838643 kubelet[2830]: E0114 13:40:59.299930 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:40:59.838643 kubelet[2830]: W0114 13:40:59.300059 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:40:59.838643 kubelet[2830]: E0114 13:40:59.300251 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.311648 kubelet[2830]: E0114 13:41:01.311124 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.311648 kubelet[2830]: W0114 13:41:01.311306 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.313890 kubelet[2830]: E0114 13:41:01.311770 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.323460 kubelet[2830]: E0114 13:41:01.323362 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.323460 kubelet[2830]: W0114 13:41:01.323400 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.323460 kubelet[2830]: E0114 13:41:01.323422 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.326019 kubelet[2830]: E0114 13:41:01.325953 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.326019 kubelet[2830]: W0114 13:41:01.325995 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.326019 kubelet[2830]: E0114 13:41:01.326016 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.329230 kubelet[2830]: E0114 13:41:01.327859 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.329230 kubelet[2830]: W0114 13:41:01.327881 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.329230 kubelet[2830]: E0114 13:41:01.327899 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.329230 kubelet[2830]: E0114 13:41:01.328891 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.329230 kubelet[2830]: W0114 13:41:01.328907 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.329230 kubelet[2830]: E0114 13:41:01.329016 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.330924 kubelet[2830]: E0114 13:41:01.330898 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.331445 kubelet[2830]: W0114 13:41:01.331284 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.331445 kubelet[2830]: E0114 13:41:01.331304 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.334342 kubelet[2830]: E0114 13:41:01.332852 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.334342 kubelet[2830]: W0114 13:41:01.333165 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.334342 kubelet[2830]: E0114 13:41:01.333182 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.338759 kubelet[2830]: E0114 13:41:01.337831 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.338759 kubelet[2830]: W0114 13:41:01.337852 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.338759 kubelet[2830]: E0114 13:41:01.337869 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.339896 kubelet[2830]: E0114 13:41:01.339873 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.340021 kubelet[2830]: W0114 13:41:01.340001 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.340215 kubelet[2830]: E0114 13:41:01.340193 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.341847 kubelet[2830]: E0114 13:41:01.341826 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.341937 kubelet[2830]: W0114 13:41:01.341919 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.342040 kubelet[2830]: E0114 13:41:01.342021 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.343881 kubelet[2830]: E0114 13:41:01.343860 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.343974 kubelet[2830]: W0114 13:41:01.343956 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.344074 kubelet[2830]: E0114 13:41:01.344056 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.344912 kubelet[2830]: E0114 13:41:01.344776 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.345791 kubelet[2830]: W0114 13:41:01.345247 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.345791 kubelet[2830]: E0114 13:41:01.345273 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.347860 kubelet[2830]: E0114 13:41:01.346645 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.347860 kubelet[2830]: W0114 13:41:01.347837 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.348007 kubelet[2830]: E0114 13:41:01.347855 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.351187 kubelet[2830]: E0114 13:41:01.350492 2830 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.072s" Jan 14 13:41:01.352644 kubelet[2830]: E0114 13:41:01.352090 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.353283 kubelet[2830]: W0114 13:41:01.353093 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.353283 kubelet[2830]: I0114 13:41:01.353160 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:41:01.357749 kubelet[2830]: E0114 13:41:01.355000 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.361057 kubelet[2830]: E0114 13:41:01.359081 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:01.361057 kubelet[2830]: E0114 13:41:01.359259 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:01.364912 kubelet[2830]: E0114 13:41:01.363000 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.364912 kubelet[2830]: W0114 13:41:01.363127 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.366354 kubelet[2830]: E0114 13:41:01.366296 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.366494 kubelet[2830]: W0114 13:41:01.366421 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.367863 kubelet[2830]: E0114 13:41:01.367809 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.367863 kubelet[2830]: W0114 13:41:01.367851 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 14 13:41:01.367998 kubelet[2830]: E0114 13:41:01.367873 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.368090 kubelet[2830]: E0114 13:41:01.368045 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.369134 kubelet[2830]: E0114 13:41:01.369000 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.369252 kubelet[2830]: W0114 13:41:01.369126 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.369952 kubelet[2830]: E0114 13:41:01.369331 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.369952 kubelet[2830]: E0114 13:41:01.369829 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.372752 kubelet[2830]: E0114 13:41:01.372498 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.372752 kubelet[2830]: W0114 13:41:01.372536 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.372752 kubelet[2830]: E0114 13:41:01.372549 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.373452 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.376007 kubelet[2830]: W0114 13:41:01.373467 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.373477 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.374293 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.376007 kubelet[2830]: W0114 13:41:01.374303 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.374520 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.375635 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.376007 kubelet[2830]: W0114 13:41:01.375646 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.375660 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.376007 kubelet[2830]: E0114 13:41:01.375970 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.376498 kubelet[2830]: W0114 13:41:01.375979 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.376498 kubelet[2830]: E0114 13:41:01.375988 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.376498 kubelet[2830]: E0114 13:41:01.376277 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.376498 kubelet[2830]: W0114 13:41:01.376286 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.376498 kubelet[2830]: E0114 13:41:01.376294 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.378629 kubelet[2830]: E0114 13:41:01.378567 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.378629 kubelet[2830]: W0114 13:41:01.378608 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.378629 kubelet[2830]: E0114 13:41:01.378619 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.489622 kubelet[2830]: E0114 13:41:01.489273 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.489622 kubelet[2830]: W0114 13:41:01.489316 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.489622 kubelet[2830]: E0114 13:41:01.489351 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.491934 kubelet[2830]: E0114 13:41:01.491364 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.491934 kubelet[2830]: W0114 13:41:01.491773 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.491934 kubelet[2830]: E0114 13:41:01.491799 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.495761 kubelet[2830]: E0114 13:41:01.493565 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.495761 kubelet[2830]: W0114 13:41:01.493603 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.495761 kubelet[2830]: E0114 13:41:01.493617 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.496259 kubelet[2830]: E0114 13:41:01.496213 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.496259 kubelet[2830]: W0114 13:41:01.496247 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.496259 kubelet[2830]: E0114 13:41:01.496260 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.496656 kubelet[2830]: E0114 13:41:01.496640 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.497200 kubelet[2830]: W0114 13:41:01.497185 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.497531 kubelet[2830]: E0114 13:41:01.497515 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.498163 kubelet[2830]: E0114 13:41:01.498144 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.500298 kubelet[2830]: W0114 13:41:01.500277 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.500389 kubelet[2830]: E0114 13:41:01.500372 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.502290 kubelet[2830]: E0114 13:41:01.502269 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.502606 kubelet[2830]: W0114 13:41:01.502586 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.502804 kubelet[2830]: E0114 13:41:01.502779 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.503430 kubelet[2830]: E0114 13:41:01.503413 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.504023 kubelet[2830]: W0114 13:41:01.504005 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.504575 kubelet[2830]: E0114 13:41:01.504518 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.507885 kubelet[2830]: E0114 13:41:01.507769 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.507885 kubelet[2830]: W0114 13:41:01.507787 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.507885 kubelet[2830]: E0114 13:41:01.507800 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.511337 kubelet[2830]: E0114 13:41:01.510924 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.511337 kubelet[2830]: W0114 13:41:01.510941 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.511337 kubelet[2830]: E0114 13:41:01.510953 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.513320 kubelet[2830]: E0114 13:41:01.513214 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.514235 kubelet[2830]: W0114 13:41:01.513952 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.514235 kubelet[2830]: E0114 13:41:01.513971 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.514981 kubelet[2830]: E0114 13:41:01.514966 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.515123 kubelet[2830]: W0114 13:41:01.515107 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.515205 kubelet[2830]: E0114 13:41:01.515193 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.518905 kubelet[2830]: E0114 13:41:01.518816 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.518905 kubelet[2830]: W0114 13:41:01.518831 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.518905 kubelet[2830]: E0114 13:41:01.518844 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.520328 kubelet[2830]: E0114 13:41:01.520313 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.520787 kubelet[2830]: W0114 13:41:01.520630 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.520787 kubelet[2830]: E0114 13:41:01.520650 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.523003 kubelet[2830]: E0114 13:41:01.522839 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.523003 kubelet[2830]: W0114 13:41:01.522856 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.523003 kubelet[2830]: E0114 13:41:01.522868 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.546754 kubelet[2830]: E0114 13:41:01.545557 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.547304 kubelet[2830]: W0114 13:41:01.547271 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.547588 kubelet[2830]: E0114 13:41:01.547516 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.548377 kubelet[2830]: E0114 13:41:01.548357 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.548516 kubelet[2830]: W0114 13:41:01.548496 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.548706 kubelet[2830]: E0114 13:41:01.548636 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.549298 kubelet[2830]: E0114 13:41:01.549274 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.549407 kubelet[2830]: W0114 13:41:01.549386 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.549962 kubelet[2830]: E0114 13:41:01.549940 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.550420 kubelet[2830]: E0114 13:41:01.550402 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.550514 kubelet[2830]: W0114 13:41:01.550496 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.550653 kubelet[2830]: E0114 13:41:01.550633 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.551300 kubelet[2830]: E0114 13:41:01.551276 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.551412 kubelet[2830]: W0114 13:41:01.551390 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.551595 kubelet[2830]: E0114 13:41:01.551527 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.553000 kubelet[2830]: E0114 13:41:01.552920 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.553137 kubelet[2830]: W0114 13:41:01.553118 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.553375 kubelet[2830]: E0114 13:41:01.553308 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.554527 kubelet[2830]: E0114 13:41:01.554435 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.554901 kubelet[2830]: W0114 13:41:01.554765 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.555373 kubelet[2830]: E0114 13:41:01.555007 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.556330 kubelet[2830]: E0114 13:41:01.556235 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.556716 kubelet[2830]: W0114 13:41:01.556537 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.557070 kubelet[2830]: E0114 13:41:01.556974 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.558178 kubelet[2830]: E0114 13:41:01.558088 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.558461 kubelet[2830]: W0114 13:41:01.558303 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.558881 kubelet[2830]: E0114 13:41:01.558661 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.559908 kubelet[2830]: E0114 13:41:01.559884 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.560033 kubelet[2830]: W0114 13:41:01.559996 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.560364 kubelet[2830]: E0114 13:41:01.560344 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.560628 kubelet[2830]: E0114 13:41:01.560589 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.560628 kubelet[2830]: W0114 13:41:01.560606 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.561057 kubelet[2830]: E0114 13:41:01.561035 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.562396 kubelet[2830]: E0114 13:41:01.562202 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.562396 kubelet[2830]: W0114 13:41:01.562219 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.562542 kubelet[2830]: E0114 13:41:01.562522 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.563156 kubelet[2830]: E0114 13:41:01.563116 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.563156 kubelet[2830]: W0114 13:41:01.563136 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.563759 kubelet[2830]: E0114 13:41:01.563429 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.564405 kubelet[2830]: E0114 13:41:01.564384 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.564506 kubelet[2830]: W0114 13:41:01.564488 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.564649 kubelet[2830]: E0114 13:41:01.564631 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.565129 kubelet[2830]: E0114 13:41:01.565111 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.565268 kubelet[2830]: W0114 13:41:01.565249 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.565550 kubelet[2830]: E0114 13:41:01.565498 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.566106 kubelet[2830]: E0114 13:41:01.565961 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.566106 kubelet[2830]: W0114 13:41:01.565977 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.566106 kubelet[2830]: E0114 13:41:01.565998 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.567637 kubelet[2830]: E0114 13:41:01.567216 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.567637 kubelet[2830]: W0114 13:41:01.567235 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.567637 kubelet[2830]: E0114 13:41:01.567250 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:41:01.568968 kubelet[2830]: E0114 13:41:01.568947 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:41:01.569424 kubelet[2830]: W0114 13:41:01.569332 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:41:01.569424 kubelet[2830]: E0114 13:41:01.569355 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:41:01.606551 containerd[1601]: time="2026-01-14T13:41:01.606286260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:01.608827 containerd[1601]: time="2026-01-14T13:41:01.608536802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:01.613914 containerd[1601]: time="2026-01-14T13:41:01.613832049Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:01.617236 containerd[1601]: time="2026-01-14T13:41:01.617050731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:01.618900 containerd[1601]: time="2026-01-14T13:41:01.618527676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 3.846603754s" Jan 14 13:41:01.618900 containerd[1601]: time="2026-01-14T13:41:01.618761092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 13:41:01.628051 containerd[1601]: time="2026-01-14T13:41:01.627847610Z" level=info msg="CreateContainer within sandbox \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 13:41:01.742264 containerd[1601]: time="2026-01-14T13:41:01.742119269Z" level=info msg="Container 119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:41:01.773886 containerd[1601]: time="2026-01-14T13:41:01.773843623Z" level=info msg="CreateContainer within sandbox \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a\"" Jan 14 13:41:01.774912 containerd[1601]: time="2026-01-14T13:41:01.774880428Z" level=info msg="StartContainer for \"119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a\"" Jan 14 13:41:01.777312 containerd[1601]: time="2026-01-14T13:41:01.777287323Z" level=info msg="connecting to shim 119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a" address="unix:///run/containerd/s/b62bf999b925cff1493dc029c9a330276dfc224b26d90aae956907991cb7ef45" protocol=ttrpc version=3 Jan 14 13:41:01.828635 systemd[1]: Started cri-containerd-119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a.scope - libcontainer container 119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a. 
Jan 14 13:41:01.903000 audit: BPF prog-id=164 op=LOAD Jan 14 13:41:01.903000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131396130326531343235333639333538373861326634613134303831 Jan 14 13:41:01.903000 audit: BPF prog-id=165 op=LOAD Jan 14 13:41:01.903000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131396130326531343235333639333538373861326634613134303831 Jan 14 13:41:01.903000 audit: BPF prog-id=165 op=UNLOAD Jan 14 13:41:01.903000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:01.903000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131396130326531343235333639333538373861326634613134303831 Jan 14 13:41:01.903000 audit: BPF prog-id=164 op=UNLOAD Jan 14 13:41:01.903000 audit[3551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131396130326531343235333639333538373861326634613134303831 Jan 14 13:41:01.903000 audit: BPF prog-id=166 op=LOAD Jan 14 13:41:01.903000 audit[3551]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3365 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131396130326531343235333639333538373861326634613134303831 Jan 14 13:41:01.965933 containerd[1601]: time="2026-01-14T13:41:01.965612651Z" level=info msg="StartContainer for \"119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a\" returns successfully" Jan 14 13:41:01.981654 systemd[1]: cri-containerd-119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a.scope: Deactivated successfully. 
Jan 14 13:41:01.986463 containerd[1601]: time="2026-01-14T13:41:01.986379545Z" level=info msg="received container exit event container_id:\"119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a\" id:\"119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a\" pid:3565 exited_at:{seconds:1768398061 nanos:984860498}" Jan 14 13:41:01.989000 audit: BPF prog-id=166 op=UNLOAD Jan 14 13:41:02.023294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-119a02e142536935878a2f4a14081e4d05895e3d65aac0a79eb197e3a98bd29a-rootfs.mount: Deactivated successfully. Jan 14 13:41:02.364213 kubelet[2830]: E0114 13:41:02.363517 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:02.368275 containerd[1601]: time="2026-01-14T13:41:02.368157692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 13:41:02.393386 kubelet[2830]: I0114 13:41:02.392444 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58998775d-7mnwx" podStartSLOduration=5.173555261 podStartE2EDuration="9.391609255s" podCreationTimestamp="2026-01-14 13:40:53 +0000 UTC" firstStartedPulling="2026-01-14 13:40:53.553209131 +0000 UTC m=+20.613779568" lastFinishedPulling="2026-01-14 13:40:57.771263124 +0000 UTC m=+24.831833562" observedRunningTime="2026-01-14 13:41:01.303965858 +0000 UTC m=+28.364536305" watchObservedRunningTime="2026-01-14 13:41:02.391609255 +0000 UTC m=+29.452179702" Jan 14 13:41:03.249720 kubelet[2830]: E0114 13:41:03.249584 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:04.601230 
containerd[1601]: time="2026-01-14T13:41:04.601038378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:04.602218 containerd[1601]: time="2026-01-14T13:41:04.602189622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 13:41:04.604289 containerd[1601]: time="2026-01-14T13:41:04.604211854Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:04.608057 containerd[1601]: time="2026-01-14T13:41:04.607871085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:04.609306 containerd[1601]: time="2026-01-14T13:41:04.609187477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.240958984s" Jan 14 13:41:04.609306 containerd[1601]: time="2026-01-14T13:41:04.609258830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 13:41:04.612437 containerd[1601]: time="2026-01-14T13:41:04.612397040Z" level=info msg="CreateContainer within sandbox \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 13:41:04.625178 containerd[1601]: time="2026-01-14T13:41:04.625077551Z" level=info msg="Container 
23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:41:04.637450 containerd[1601]: time="2026-01-14T13:41:04.636584007Z" level=info msg="CreateContainer within sandbox \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54\"" Jan 14 13:41:04.638428 containerd[1601]: time="2026-01-14T13:41:04.638318726Z" level=info msg="StartContainer for \"23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54\"" Jan 14 13:41:04.641108 containerd[1601]: time="2026-01-14T13:41:04.641055796Z" level=info msg="connecting to shim 23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54" address="unix:///run/containerd/s/b62bf999b925cff1493dc029c9a330276dfc224b26d90aae956907991cb7ef45" protocol=ttrpc version=3 Jan 14 13:41:04.688926 systemd[1]: Started cri-containerd-23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54.scope - libcontainer container 23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54. 
Jan 14 13:41:04.788000 audit: BPF prog-id=167 op=LOAD
Jan 14 13:41:04.792185 kernel: kauditd_printk_skb: 28 callbacks suppressed
Jan 14 13:41:04.792384 kernel: audit: type=1334 audit(1768398064.788:553): prog-id=167 op=LOAD
Jan 14 13:41:04.794797 kernel: audit: type=1300 audit(1768398064.788:553): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.788000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.806838 kernel: audit: type=1327 audit(1768398064.788:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.789000 audit: BPF prog-id=168 op=LOAD
Jan 14 13:41:04.789000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.831122 kernel: audit: type=1334 audit(1768398064.789:554): prog-id=168 op=LOAD
Jan 14 13:41:04.831334 kernel: audit: type=1300 audit(1768398064.789:554): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.845652 kernel: audit: type=1327 audit(1768398064.789:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.789000 audit: BPF prog-id=168 op=UNLOAD
Jan 14 13:41:04.789000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.860457 kernel: audit: type=1334 audit(1768398064.789:555): prog-id=168 op=UNLOAD
Jan 14 13:41:04.860969 kernel: audit: type=1300 audit(1768398064.789:555): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.861018 kernel: audit: type=1327 audit(1768398064.789:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.789000 audit: BPF prog-id=167 op=UNLOAD
Jan 14 13:41:04.875344 kernel: audit: type=1334 audit(1768398064.789:556): prog-id=167 op=UNLOAD
Jan 14 13:41:04.789000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.789000 audit: BPF prog-id=169 op=LOAD
Jan 14 13:41:04.789000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3365 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:41:04.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616638303136353339643633616364653439666165623561343862
Jan 14 13:41:04.877589 containerd[1601]: time="2026-01-14T13:41:04.877558147Z" level=info msg="StartContainer for \"23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54\" returns successfully"
Jan 14 13:41:05.249939 kubelet[2830]: E0114 13:41:05.249621 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b"
Jan 14 13:41:05.391169 kubelet[2830]: E0114 13:41:05.391115 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:41:06.286136 systemd[1]: cri-containerd-23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54.scope: Deactivated successfully.
Jan 14 13:41:06.286781 systemd[1]: cri-containerd-23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54.scope: Consumed 1.489s CPU time, 178.5M memory peak, 4.4M read from disk, 171.3M written to disk.
Jan 14 13:41:06.292123 containerd[1601]: time="2026-01-14T13:41:06.291880412Z" level=info msg="received container exit event container_id:\"23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54\" id:\"23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54\" pid:3624 exited_at:{seconds:1768398066 nanos:288371732}"
Jan 14 13:41:06.292000 audit: BPF prog-id=169 op=UNLOAD
Jan 14 13:41:06.338019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23af8016539d63acde49faeb5a48bfcf431175f8f8b06752e888863832ea4d54-rootfs.mount: Deactivated successfully.
Jan 14 13:41:06.355408 kubelet[2830]: I0114 13:41:06.355246 2830 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 14 13:41:06.394145 kubelet[2830]: E0114 13:41:06.394082 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:41:06.424309 systemd[1]: Created slice kubepods-besteffort-pod4c4e216b_6b87_484b_9cae_6f0965d6396b.slice - libcontainer container kubepods-besteffort-pod4c4e216b_6b87_484b_9cae_6f0965d6396b.slice.
Jan 14 13:41:06.436903 systemd[1]: Created slice kubepods-besteffort-poddb5c059b_487e_4ffe_80d9_50079eb46981.slice - libcontainer container kubepods-besteffort-poddb5c059b_487e_4ffe_80d9_50079eb46981.slice.
Jan 14 13:41:06.446962 systemd[1]: Created slice kubepods-burstable-podb2d973ed_942e_4bef_9dc2_ff5f578f60a0.slice - libcontainer container kubepods-burstable-podb2d973ed_942e_4bef_9dc2_ff5f578f60a0.slice.
Jan 14 13:41:06.458495 systemd[1]: Created slice kubepods-besteffort-podcf16b57f_d6ab_45f8_acf0_0156a11bd169.slice - libcontainer container kubepods-besteffort-podcf16b57f_d6ab_45f8_acf0_0156a11bd169.slice.
Jan 14 13:41:06.467764 systemd[1]: Created slice kubepods-burstable-poda23cfe8e_2547_4e36_9f35_c3f56eeb09a7.slice - libcontainer container kubepods-burstable-poda23cfe8e_2547_4e36_9f35_c3f56eeb09a7.slice.
Jan 14 13:41:06.478138 systemd[1]: Created slice kubepods-besteffort-podd3d1f008_c373_460c_bb65_2604d6d39838.slice - libcontainer container kubepods-besteffort-podd3d1f008_c373_460c_bb65_2604d6d39838.slice.
Jan 14 13:41:06.484859 systemd[1]: Created slice kubepods-besteffort-pod8d93874e_aa0f_4caa_9b42_ab659ab91c41.slice - libcontainer container kubepods-besteffort-pod8d93874e_aa0f_4caa_9b42_ab659ab91c41.slice.
Jan 14 13:41:06.489235 kubelet[2830]: I0114 13:41:06.489193 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjdc\" (UniqueName: \"kubernetes.io/projected/db5c059b-487e-4ffe-80d9-50079eb46981-kube-api-access-qsjdc\") pod \"whisker-755cbcdd74-2p5hz\" (UID: \"db5c059b-487e-4ffe-80d9-50079eb46981\") " pod="calico-system/whisker-755cbcdd74-2p5hz"
Jan 14 13:41:06.489507 kubelet[2830]: I0114 13:41:06.489273 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nclbn\" (UniqueName: \"kubernetes.io/projected/a23cfe8e-2547-4e36-9f35-c3f56eeb09a7-kube-api-access-nclbn\") pod \"coredns-668d6bf9bc-h76mq\" (UID: \"a23cfe8e-2547-4e36-9f35-c3f56eeb09a7\") " pod="kube-system/coredns-668d6bf9bc-h76mq"
Jan 14 13:41:06.489507 kubelet[2830]: I0114 13:41:06.489311 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-ca-bundle\") pod \"whisker-755cbcdd74-2p5hz\" (UID: \"db5c059b-487e-4ffe-80d9-50079eb46981\") " pod="calico-system/whisker-755cbcdd74-2p5hz"
Jan 14 13:41:06.489507 kubelet[2830]: I0114 13:41:06.489389 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d93874e-aa0f-4caa-9b42-ab659ab91c41-calico-apiserver-certs\") pod \"calico-apiserver-7d8b4854-7h7cg\" (UID: \"8d93874e-aa0f-4caa-9b42-ab659ab91c41\") " pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg"
Jan 14 13:41:06.489507 kubelet[2830]: I0114 13:41:06.489422 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrk54\" (UniqueName: \"kubernetes.io/projected/8d93874e-aa0f-4caa-9b42-ab659ab91c41-kube-api-access-wrk54\") pod \"calico-apiserver-7d8b4854-7h7cg\" (UID: \"8d93874e-aa0f-4caa-9b42-ab659ab91c41\") " pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg"
Jan 14 13:41:06.489507 kubelet[2830]: I0114 13:41:06.489458 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23cfe8e-2547-4e36-9f35-c3f56eeb09a7-config-volume\") pod \"coredns-668d6bf9bc-h76mq\" (UID: \"a23cfe8e-2547-4e36-9f35-c3f56eeb09a7\") " pod="kube-system/coredns-668d6bf9bc-h76mq"
Jan 14 13:41:06.490494 kubelet[2830]: I0114 13:41:06.489485 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-backend-key-pair\") pod \"whisker-755cbcdd74-2p5hz\" (UID: \"db5c059b-487e-4ffe-80d9-50079eb46981\") " pod="calico-system/whisker-755cbcdd74-2p5hz"
Jan 14 13:41:06.591162 kubelet[2830]: I0114 13:41:06.590956 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3d1f008-c373-460c-bb65-2604d6d39838-goldmane-ca-bundle\") pod \"goldmane-666569f655-26r2w\" (UID: \"d3d1f008-c373-460c-bb65-2604d6d39838\") " pod="calico-system/goldmane-666569f655-26r2w"
Jan 14 13:41:06.591162 kubelet[2830]: I0114 13:41:06.591077 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dg8\" (UniqueName: \"kubernetes.io/projected/d3d1f008-c373-460c-bb65-2604d6d39838-kube-api-access-62dg8\") pod \"goldmane-666569f655-26r2w\" (UID: \"d3d1f008-c373-460c-bb65-2604d6d39838\") " pod="calico-system/goldmane-666569f655-26r2w"
Jan 14 13:41:06.591162 kubelet[2830]: I0114 13:41:06.591116 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9cpj\" (UniqueName: \"kubernetes.io/projected/cf16b57f-d6ab-45f8-acf0-0156a11bd169-kube-api-access-q9cpj\") pod \"calico-apiserver-7d8b4854-wgpjs\" (UID: \"cf16b57f-d6ab-45f8-acf0-0156a11bd169\") " pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs"
Jan 14 13:41:06.591162 kubelet[2830]: I0114 13:41:06.591153 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf16b57f-d6ab-45f8-acf0-0156a11bd169-calico-apiserver-certs\") pod \"calico-apiserver-7d8b4854-wgpjs\" (UID: \"cf16b57f-d6ab-45f8-acf0-0156a11bd169\") " pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs"
Jan 14 13:41:06.591410 kubelet[2830]: I0114 13:41:06.591186 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c4e216b-6b87-484b-9cae-6f0965d6396b-tigera-ca-bundle\") pod \"calico-kube-controllers-5cd75cc4bb-k94tm\" (UID: \"4c4e216b-6b87-484b-9cae-6f0965d6396b\") " pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm"
Jan 14 13:41:06.591410 kubelet[2830]: I0114 13:41:06.591271 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnr2\" (UniqueName: \"kubernetes.io/projected/b2d973ed-942e-4bef-9dc2-ff5f578f60a0-kube-api-access-nfnr2\") pod \"coredns-668d6bf9bc-xtrzq\" (UID: \"b2d973ed-942e-4bef-9dc2-ff5f578f60a0\") " pod="kube-system/coredns-668d6bf9bc-xtrzq"
Jan 14 13:41:06.591410 kubelet[2830]: I0114 13:41:06.591316 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdt9x\" (UniqueName: \"kubernetes.io/projected/4c4e216b-6b87-484b-9cae-6f0965d6396b-kube-api-access-pdt9x\") pod \"calico-kube-controllers-5cd75cc4bb-k94tm\" (UID: \"4c4e216b-6b87-484b-9cae-6f0965d6396b\") " pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm"
Jan 14 13:41:06.591410 kubelet[2830]: I0114 13:41:06.591371 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d1f008-c373-460c-bb65-2604d6d39838-config\") pod \"goldmane-666569f655-26r2w\" (UID: \"d3d1f008-c373-460c-bb65-2604d6d39838\") " pod="calico-system/goldmane-666569f655-26r2w"
Jan 14 13:41:06.591517 kubelet[2830]: I0114 13:41:06.591419 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2d973ed-942e-4bef-9dc2-ff5f578f60a0-config-volume\") pod \"coredns-668d6bf9bc-xtrzq\" (UID: \"b2d973ed-942e-4bef-9dc2-ff5f578f60a0\") " pod="kube-system/coredns-668d6bf9bc-xtrzq"
Jan 14 13:41:06.591517 kubelet[2830]: I0114 13:41:06.591447 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d3d1f008-c373-460c-bb65-2604d6d39838-goldmane-key-pair\") pod \"goldmane-666569f655-26r2w\" (UID: \"d3d1f008-c373-460c-bb65-2604d6d39838\") " pod="calico-system/goldmane-666569f655-26r2w"
Jan 14 13:41:06.732236 containerd[1601]: time="2026-01-14T13:41:06.732164098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd75cc4bb-k94tm,Uid:4c4e216b-6b87-484b-9cae-6f0965d6396b,Namespace:calico-system,Attempt:0,}"
Jan 14 13:41:06.743139 containerd[1601]: time="2026-01-14T13:41:06.742918741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755cbcdd74-2p5hz,Uid:db5c059b-487e-4ffe-80d9-50079eb46981,Namespace:calico-system,Attempt:0,}"
Jan 14 13:41:06.754010 kubelet[2830]: E0114 13:41:06.753943 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:41:06.755098 containerd[1601]: time="2026-01-14T13:41:06.754859465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtrzq,Uid:b2d973ed-942e-4bef-9dc2-ff5f578f60a0,Namespace:kube-system,Attempt:0,}"
Jan 14 13:41:06.765115 containerd[1601]: time="2026-01-14T13:41:06.764649746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-wgpjs,Uid:cf16b57f-d6ab-45f8-acf0-0156a11bd169,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 13:41:06.777247 kubelet[2830]: E0114 13:41:06.776170 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:41:06.778537 containerd[1601]: time="2026-01-14T13:41:06.778239190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h76mq,Uid:a23cfe8e-2547-4e36-9f35-c3f56eeb09a7,Namespace:kube-system,Attempt:0,}"
Jan 14 13:41:06.782658 containerd[1601]: time="2026-01-14T13:41:06.782320896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26r2w,Uid:d3d1f008-c373-460c-bb65-2604d6d39838,Namespace:calico-system,Attempt:0,}"
Jan 14 13:41:06.794995 containerd[1601]: time="2026-01-14T13:41:06.794943960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-7h7cg,Uid:8d93874e-aa0f-4caa-9b42-ab659ab91c41,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 13:41:06.985556 containerd[1601]: time="2026-01-14T13:41:06.985221224Z" level=error msg="Failed to destroy network for sandbox \"5dfa184ae655f75812344d1af8a173be891d99fbac025b0d4bab301edf74c870\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:06.992357 containerd[1601]: time="2026-01-14T13:41:06.992276746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h76mq,Uid:a23cfe8e-2547-4e36-9f35-c3f56eeb09a7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa184ae655f75812344d1af8a173be891d99fbac025b0d4bab301edf74c870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:06.995266 kubelet[2830]: E0114 13:41:06.993751 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa184ae655f75812344d1af8a173be891d99fbac025b0d4bab301edf74c870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:06.995266 kubelet[2830]: E0114 13:41:06.993930 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa184ae655f75812344d1af8a173be891d99fbac025b0d4bab301edf74c870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h76mq"
Jan 14 13:41:06.995266 kubelet[2830]: E0114 13:41:06.994000 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa184ae655f75812344d1af8a173be891d99fbac025b0d4bab301edf74c870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h76mq"
Jan 14 13:41:06.995460 kubelet[2830]: E0114 13:41:06.994102 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h76mq_kube-system(a23cfe8e-2547-4e36-9f35-c3f56eeb09a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h76mq_kube-system(a23cfe8e-2547-4e36-9f35-c3f56eeb09a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dfa184ae655f75812344d1af8a173be891d99fbac025b0d4bab301edf74c870\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h76mq" podUID="a23cfe8e-2547-4e36-9f35-c3f56eeb09a7"
Jan 14 13:41:06.997349 containerd[1601]: time="2026-01-14T13:41:06.996809732Z" level=error msg="Failed to destroy network for sandbox \"6e4f1e951c408b276ff42a5ed181c6ed0c1e3b56bcabd53da588cd4cdd67a70d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.001561 containerd[1601]: time="2026-01-14T13:41:07.001505108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26r2w,Uid:d3d1f008-c373-460c-bb65-2604d6d39838,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4f1e951c408b276ff42a5ed181c6ed0c1e3b56bcabd53da588cd4cdd67a70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.002315 kubelet[2830]: E0114 13:41:07.002282 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4f1e951c408b276ff42a5ed181c6ed0c1e3b56bcabd53da588cd4cdd67a70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.002573 kubelet[2830]: E0114 13:41:07.002461 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4f1e951c408b276ff42a5ed181c6ed0c1e3b56bcabd53da588cd4cdd67a70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-26r2w"
Jan 14 13:41:07.002573 kubelet[2830]: E0114 13:41:07.002545 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4f1e951c408b276ff42a5ed181c6ed0c1e3b56bcabd53da588cd4cdd67a70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-26r2w"
Jan 14 13:41:07.003901 kubelet[2830]: E0114 13:41:07.002816 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-26r2w_calico-system(d3d1f008-c373-460c-bb65-2604d6d39838)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-26r2w_calico-system(d3d1f008-c373-460c-bb65-2604d6d39838)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e4f1e951c408b276ff42a5ed181c6ed0c1e3b56bcabd53da588cd4cdd67a70d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838"
Jan 14 13:41:07.007862 containerd[1601]: time="2026-01-14T13:41:07.007834025Z" level=error msg="Failed to destroy network for sandbox \"40acc875fee2851aa2e5ddfe928645813e52dfa213cedebc37631985bd637bfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.010144 containerd[1601]: time="2026-01-14T13:41:07.007850464Z" level=error msg="Failed to destroy network for sandbox \"cb92c2f4d2bf262157714b7db8acad38f3c1507a360128d75e958e0ef25f226b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.011187 containerd[1601]: time="2026-01-14T13:41:07.011013563Z" level=error msg="Failed to destroy network for sandbox \"b4c31d56cf2e41e9c1969b43fba4d7c4442472d0eedec8be730b8b6de4ec7600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.013253 containerd[1601]: time="2026-01-14T13:41:07.013198922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtrzq,Uid:b2d973ed-942e-4bef-9dc2-ff5f578f60a0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40acc875fee2851aa2e5ddfe928645813e52dfa213cedebc37631985bd637bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.013954 kubelet[2830]: E0114 13:41:07.013853 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40acc875fee2851aa2e5ddfe928645813e52dfa213cedebc37631985bd637bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.013954 kubelet[2830]: E0114 13:41:07.013928 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40acc875fee2851aa2e5ddfe928645813e52dfa213cedebc37631985bd637bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xtrzq"
Jan 14 13:41:07.013954 kubelet[2830]: E0114 13:41:07.013948 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40acc875fee2851aa2e5ddfe928645813e52dfa213cedebc37631985bd637bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xtrzq"
Jan 14 13:41:07.014153 kubelet[2830]: E0114 13:41:07.013985 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xtrzq_kube-system(b2d973ed-942e-4bef-9dc2-ff5f578f60a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xtrzq_kube-system(b2d973ed-942e-4bef-9dc2-ff5f578f60a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40acc875fee2851aa2e5ddfe928645813e52dfa213cedebc37631985bd637bfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xtrzq" podUID="b2d973ed-942e-4bef-9dc2-ff5f578f60a0"
Jan 14 13:41:07.022586 containerd[1601]: time="2026-01-14T13:41:07.022412206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755cbcdd74-2p5hz,Uid:db5c059b-487e-4ffe-80d9-50079eb46981,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c31d56cf2e41e9c1969b43fba4d7c4442472d0eedec8be730b8b6de4ec7600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.023273 kubelet[2830]: E0114 13:41:07.023193 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c31d56cf2e41e9c1969b43fba4d7c4442472d0eedec8be730b8b6de4ec7600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.023328 kubelet[2830]: E0114 13:41:07.023275 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c31d56cf2e41e9c1969b43fba4d7c4442472d0eedec8be730b8b6de4ec7600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-755cbcdd74-2p5hz"
Jan 14 13:41:07.023328 kubelet[2830]: E0114 13:41:07.023295 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c31d56cf2e41e9c1969b43fba4d7c4442472d0eedec8be730b8b6de4ec7600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-755cbcdd74-2p5hz"
Jan 14 13:41:07.023445 kubelet[2830]: E0114 13:41:07.023337 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-755cbcdd74-2p5hz_calico-system(db5c059b-487e-4ffe-80d9-50079eb46981)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-755cbcdd74-2p5hz_calico-system(db5c059b-487e-4ffe-80d9-50079eb46981)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c31d56cf2e41e9c1969b43fba4d7c4442472d0eedec8be730b8b6de4ec7600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-755cbcdd74-2p5hz" podUID="db5c059b-487e-4ffe-80d9-50079eb46981"
Jan 14 13:41:07.023934 containerd[1601]: time="2026-01-14T13:41:07.023866069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd75cc4bb-k94tm,Uid:4c4e216b-6b87-484b-9cae-6f0965d6396b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb92c2f4d2bf262157714b7db8acad38f3c1507a360128d75e958e0ef25f226b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.025027 kubelet[2830]: E0114 13:41:07.024983 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb92c2f4d2bf262157714b7db8acad38f3c1507a360128d75e958e0ef25f226b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.025269 kubelet[2830]: E0114 13:41:07.025065 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb92c2f4d2bf262157714b7db8acad38f3c1507a360128d75e958e0ef25f226b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm"
Jan 14 13:41:07.025269 kubelet[2830]: E0114 13:41:07.025089 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb92c2f4d2bf262157714b7db8acad38f3c1507a360128d75e958e0ef25f226b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm"
Jan 14 13:41:07.025466 kubelet[2830]: E0114 13:41:07.025176 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cd75cc4bb-k94tm_calico-system(4c4e216b-6b87-484b-9cae-6f0965d6396b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cd75cc4bb-k94tm_calico-system(4c4e216b-6b87-484b-9cae-6f0965d6396b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb92c2f4d2bf262157714b7db8acad38f3c1507a360128d75e958e0ef25f226b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b"
Jan 14 13:41:07.026532 containerd[1601]: time="2026-01-14T13:41:07.026430178Z" level=error msg="Failed to destroy network for sandbox \"32f0957d9d603b68332cbb00c3add5fac87fb3a5c05660bea494e8993cffcfee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.031587 containerd[1601]: time="2026-01-14T13:41:07.031531712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-wgpjs,Uid:cf16b57f-d6ab-45f8-acf0-0156a11bd169,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f0957d9d603b68332cbb00c3add5fac87fb3a5c05660bea494e8993cffcfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.031994 kubelet[2830]: E0114 13:41:07.031863 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f0957d9d603b68332cbb00c3add5fac87fb3a5c05660bea494e8993cffcfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.031994 kubelet[2830]: E0114 13:41:07.031926 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f0957d9d603b68332cbb00c3add5fac87fb3a5c05660bea494e8993cffcfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs"
Jan 14 13:41:07.031994 kubelet[2830]: E0114 13:41:07.031947 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f0957d9d603b68332cbb00c3add5fac87fb3a5c05660bea494e8993cffcfee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs"
Jan 14 13:41:07.032192 kubelet[2830]: E0114 13:41:07.032049 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d8b4854-wgpjs_calico-apiserver(cf16b57f-d6ab-45f8-acf0-0156a11bd169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d8b4854-wgpjs_calico-apiserver(cf16b57f-d6ab-45f8-acf0-0156a11bd169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32f0957d9d603b68332cbb00c3add5fac87fb3a5c05660bea494e8993cffcfee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169"
Jan 14 13:41:07.034040 containerd[1601]: time="2026-01-14T13:41:07.033984577Z" level=error msg="Failed to destroy network for sandbox \"78df6e108038f80fe6002b862c550997ca40431b6487431a7dd2b7041fae06c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.037701 containerd[1601]: time="2026-01-14T13:41:07.037562854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-7h7cg,Uid:8d93874e-aa0f-4caa-9b42-ab659ab91c41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78df6e108038f80fe6002b862c550997ca40431b6487431a7dd2b7041fae06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.038014 kubelet[2830]: E0114 13:41:07.037942 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78df6e108038f80fe6002b862c550997ca40431b6487431a7dd2b7041fae06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:41:07.038091 kubelet[2830]: E0114 13:41:07.038029 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78df6e108038f80fe6002b862c550997ca40431b6487431a7dd2b7041fae06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg"
Jan 14 13:41:07.038091 kubelet[2830]: E0114 13:41:07.038051 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78df6e108038f80fe6002b862c550997ca40431b6487431a7dd2b7041fae06c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg"
Jan 14 13:41:07.038202 kubelet[2830]: E0114 13:41:07.038109 2830 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d8b4854-7h7cg_calico-apiserver(8d93874e-aa0f-4caa-9b42-ab659ab91c41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d8b4854-7h7cg_calico-apiserver(8d93874e-aa0f-4caa-9b42-ab659ab91c41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78df6e108038f80fe6002b862c550997ca40431b6487431a7dd2b7041fae06c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:41:07.261415 systemd[1]: Created slice kubepods-besteffort-pod70f39dd7_0818_4b8e_a6d8_f99942268a1b.slice - libcontainer container kubepods-besteffort-pod70f39dd7_0818_4b8e_a6d8_f99942268a1b.slice. Jan 14 13:41:07.266341 containerd[1601]: time="2026-01-14T13:41:07.266286816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6pz,Uid:70f39dd7-0818-4b8e-a6d8-f99942268a1b,Namespace:calico-system,Attempt:0,}" Jan 14 13:41:07.377346 containerd[1601]: time="2026-01-14T13:41:07.377108737Z" level=error msg="Failed to destroy network for sandbox \"e364b002859f3ab1ba6bc452f67584303aa64a482e3e3fc4a100b90c3f9f2cbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:41:07.381442 systemd[1]: run-netns-cni\x2d92015f64\x2d5a29\x2d1d61\x2deaa1\x2d2420f26569fe.mount: Deactivated successfully. 
Jan 14 13:41:07.383540 containerd[1601]: time="2026-01-14T13:41:07.382612353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6pz,Uid:70f39dd7-0818-4b8e-a6d8-f99942268a1b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e364b002859f3ab1ba6bc452f67584303aa64a482e3e3fc4a100b90c3f9f2cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:41:07.384622 kubelet[2830]: E0114 13:41:07.384505 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e364b002859f3ab1ba6bc452f67584303aa64a482e3e3fc4a100b90c3f9f2cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:41:07.385052 kubelet[2830]: E0114 13:41:07.384652 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e364b002859f3ab1ba6bc452f67584303aa64a482e3e3fc4a100b90c3f9f2cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5l6pz" Jan 14 13:41:07.385052 kubelet[2830]: E0114 13:41:07.384783 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e364b002859f3ab1ba6bc452f67584303aa64a482e3e3fc4a100b90c3f9f2cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5l6pz" 
Jan 14 13:41:07.385052 kubelet[2830]: E0114 13:41:07.384925 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e364b002859f3ab1ba6bc452f67584303aa64a482e3e3fc4a100b90c3f9f2cbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:07.408025 kubelet[2830]: E0114 13:41:07.407925 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:07.408723 containerd[1601]: time="2026-01-14T13:41:07.408596840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 13:41:15.036405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147220984.mount: Deactivated successfully. 
Jan 14 13:41:15.288782 containerd[1601]: time="2026-01-14T13:41:15.288243759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:15.289844 containerd[1601]: time="2026-01-14T13:41:15.289779621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 13:41:15.297802 containerd[1601]: time="2026-01-14T13:41:15.296985855Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:15.314859 containerd[1601]: time="2026-01-14T13:41:15.314282271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:41:15.319118 containerd[1601]: time="2026-01-14T13:41:15.318976545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.910324302s" Jan 14 13:41:15.319291 containerd[1601]: time="2026-01-14T13:41:15.319147484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 13:41:15.417136 containerd[1601]: time="2026-01-14T13:41:15.417029592Z" level=info msg="CreateContainer within sandbox \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 13:41:15.467773 containerd[1601]: time="2026-01-14T13:41:15.467105403Z" level=info msg="Container 
3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:41:15.485499 containerd[1601]: time="2026-01-14T13:41:15.485426813Z" level=info msg="CreateContainer within sandbox \"a5cc20cc36b47da47f0112fb905adb99793b82e5ec00804b5edfa274abcc1f3a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c\"" Jan 14 13:41:15.489143 containerd[1601]: time="2026-01-14T13:41:15.489112596Z" level=info msg="StartContainer for \"3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c\"" Jan 14 13:41:15.491599 containerd[1601]: time="2026-01-14T13:41:15.491554595Z" level=info msg="connecting to shim 3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c" address="unix:///run/containerd/s/b62bf999b925cff1493dc029c9a330276dfc224b26d90aae956907991cb7ef45" protocol=ttrpc version=3 Jan 14 13:41:15.520899 systemd[1]: Started cri-containerd-3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c.scope - libcontainer container 3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c. 
Jan 14 13:41:15.636000 audit: BPF prog-id=170 op=LOAD Jan 14 13:41:15.646510 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 13:41:15.646645 kernel: audit: type=1334 audit(1768398075.636:559): prog-id=170 op=LOAD Jan 14 13:41:15.636000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.661354 kernel: audit: type=1300 audit(1768398075.636:559): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.661437 kernel: audit: type=1327 audit(1768398075.636:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.636000 audit: BPF prog-id=171 op=LOAD Jan 14 13:41:15.675845 kernel: audit: type=1334 audit(1768398075.636:560): prog-id=171 op=LOAD Jan 14 13:41:15.676009 kernel: audit: type=1300 audit(1768398075.636:560): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.636000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.699213 kernel: audit: type=1327 audit(1768398075.636:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.636000 audit: BPF prog-id=171 op=UNLOAD Jan 14 13:41:15.636000 audit[3929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.714866 kernel: audit: type=1334 audit(1768398075.636:561): prog-id=171 op=UNLOAD Jan 14 13:41:15.714934 kernel: audit: type=1300 audit(1768398075.636:561): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.714969 kernel: audit: type=1327 audit(1768398075.636:561): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.636000 audit: BPF prog-id=170 op=UNLOAD Jan 14 13:41:15.729899 kernel: audit: type=1334 audit(1768398075.636:562): prog-id=170 op=UNLOAD Jan 14 13:41:15.636000 audit[3929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.636000 audit: BPF prog-id=172 op=LOAD Jan 14 13:41:15.636000 audit[3929]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3365 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:15.636000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353665663331393637663437613830636432393034323930333130 Jan 14 13:41:15.745532 containerd[1601]: time="2026-01-14T13:41:15.745457337Z" level=info msg="StartContainer for \"3956ef31967f47a80cd2904290310b5b9e8571cc33d118b084dea12ff9227e0c\" returns successfully" Jan 14 13:41:15.877483 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 13:41:15.877624 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 13:41:16.090920 kubelet[2830]: I0114 13:41:16.090622 2830 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-backend-key-pair\") pod \"db5c059b-487e-4ffe-80d9-50079eb46981\" (UID: \"db5c059b-487e-4ffe-80d9-50079eb46981\") " Jan 14 13:41:16.090920 kubelet[2830]: I0114 13:41:16.090816 2830 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsjdc\" (UniqueName: \"kubernetes.io/projected/db5c059b-487e-4ffe-80d9-50079eb46981-kube-api-access-qsjdc\") pod \"db5c059b-487e-4ffe-80d9-50079eb46981\" (UID: \"db5c059b-487e-4ffe-80d9-50079eb46981\") " Jan 14 13:41:16.090920 kubelet[2830]: I0114 13:41:16.090869 2830 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-ca-bundle\") pod \"db5c059b-487e-4ffe-80d9-50079eb46981\" (UID: \"db5c059b-487e-4ffe-80d9-50079eb46981\") " Jan 14 13:41:16.091787 kubelet[2830]: I0114 13:41:16.091501 2830 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "db5c059b-487e-4ffe-80d9-50079eb46981" (UID: "db5c059b-487e-4ffe-80d9-50079eb46981"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 13:41:16.105823 kubelet[2830]: I0114 13:41:16.105451 2830 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5c059b-487e-4ffe-80d9-50079eb46981-kube-api-access-qsjdc" (OuterVolumeSpecName: "kube-api-access-qsjdc") pod "db5c059b-487e-4ffe-80d9-50079eb46981" (UID: "db5c059b-487e-4ffe-80d9-50079eb46981"). InnerVolumeSpecName "kube-api-access-qsjdc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 13:41:16.105782 systemd[1]: var-lib-kubelet-pods-db5c059b\x2d487e\x2d4ffe\x2d80d9\x2d50079eb46981-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqsjdc.mount: Deactivated successfully. Jan 14 13:41:16.110497 systemd[1]: var-lib-kubelet-pods-db5c059b\x2d487e\x2d4ffe\x2d80d9\x2d50079eb46981-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 13:41:16.110918 kubelet[2830]: I0114 13:41:16.110846 2830 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "db5c059b-487e-4ffe-80d9-50079eb46981" (UID: "db5c059b-487e-4ffe-80d9-50079eb46981"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 13:41:16.192395 kubelet[2830]: I0114 13:41:16.192174 2830 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 13:41:16.192395 kubelet[2830]: I0114 13:41:16.192346 2830 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db5c059b-487e-4ffe-80d9-50079eb46981-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 13:41:16.192395 kubelet[2830]: I0114 13:41:16.192362 2830 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qsjdc\" (UniqueName: \"kubernetes.io/projected/db5c059b-487e-4ffe-80d9-50079eb46981-kube-api-access-qsjdc\") on node \"localhost\" DevicePath \"\"" Jan 14 13:41:16.502226 kubelet[2830]: E0114 13:41:16.501513 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:16.516394 systemd[1]: Removed slice kubepods-besteffort-poddb5c059b_487e_4ffe_80d9_50079eb46981.slice - libcontainer container kubepods-besteffort-poddb5c059b_487e_4ffe_80d9_50079eb46981.slice. 
Jan 14 13:41:16.529286 kubelet[2830]: I0114 13:41:16.529232 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nnjw8" podStartSLOduration=1.836580284 podStartE2EDuration="23.529217015s" podCreationTimestamp="2026-01-14 13:40:53 +0000 UTC" firstStartedPulling="2026-01-14 13:40:53.693003016 +0000 UTC m=+20.753573443" lastFinishedPulling="2026-01-14 13:41:15.385639736 +0000 UTC m=+42.446210174" observedRunningTime="2026-01-14 13:41:16.52663645 +0000 UTC m=+43.587206917" watchObservedRunningTime="2026-01-14 13:41:16.529217015 +0000 UTC m=+43.589787452" Jan 14 13:41:16.615464 systemd[1]: Created slice kubepods-besteffort-pod0591f86b_0203_41aa_ad8c_78b05a83945e.slice - libcontainer container kubepods-besteffort-pod0591f86b_0203_41aa_ad8c_78b05a83945e.slice. Jan 14 13:41:16.699512 kubelet[2830]: I0114 13:41:16.699381 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtm9\" (UniqueName: \"kubernetes.io/projected/0591f86b-0203-41aa-ad8c-78b05a83945e-kube-api-access-cmtm9\") pod \"whisker-66cc84c7bd-ghp9r\" (UID: \"0591f86b-0203-41aa-ad8c-78b05a83945e\") " pod="calico-system/whisker-66cc84c7bd-ghp9r" Jan 14 13:41:16.699512 kubelet[2830]: I0114 13:41:16.699488 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0591f86b-0203-41aa-ad8c-78b05a83945e-whisker-backend-key-pair\") pod \"whisker-66cc84c7bd-ghp9r\" (UID: \"0591f86b-0203-41aa-ad8c-78b05a83945e\") " pod="calico-system/whisker-66cc84c7bd-ghp9r" Jan 14 13:41:16.699756 kubelet[2830]: I0114 13:41:16.699538 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0591f86b-0203-41aa-ad8c-78b05a83945e-whisker-ca-bundle\") pod \"whisker-66cc84c7bd-ghp9r\" (UID: 
\"0591f86b-0203-41aa-ad8c-78b05a83945e\") " pod="calico-system/whisker-66cc84c7bd-ghp9r" Jan 14 13:41:16.923932 containerd[1601]: time="2026-01-14T13:41:16.923797670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cc84c7bd-ghp9r,Uid:0591f86b-0203-41aa-ad8c-78b05a83945e,Namespace:calico-system,Attempt:0,}" Jan 14 13:41:17.231277 systemd-networkd[1510]: cali98179252a57: Link UP Jan 14 13:41:17.232020 systemd-networkd[1510]: cali98179252a57: Gained carrier Jan 14 13:41:17.264775 kubelet[2830]: I0114 13:41:17.264052 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5c059b-487e-4ffe-80d9-50079eb46981" path="/var/lib/kubelet/pods/db5c059b-487e-4ffe-80d9-50079eb46981/volumes" Jan 14 13:41:17.268831 containerd[1601]: 2026-01-14 13:41:16.963 [INFO][4023] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:41:17.268831 containerd[1601]: 2026-01-14 13:41:16.992 [INFO][4023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0 whisker-66cc84c7bd- calico-system 0591f86b-0203-41aa-ad8c-78b05a83945e 954 0 2026-01-14 13:41:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66cc84c7bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66cc84c7bd-ghp9r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali98179252a57 [] [] }} ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-" Jan 14 13:41:17.268831 containerd[1601]: 2026-01-14 13:41:16.992 [INFO][4023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" 
Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.268831 containerd[1601]: 2026-01-14 13:41:17.143 [INFO][4038] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" HandleID="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Workload="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.144 [INFO][4038] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" HandleID="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Workload="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66cc84c7bd-ghp9r", "timestamp":"2026-01-14 13:41:17.143005418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.146 [INFO][4038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.147 [INFO][4038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.148 [INFO][4038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.164 [INFO][4038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" host="localhost" Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.175 [INFO][4038] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.182 [INFO][4038] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.184 [INFO][4038] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.188 [INFO][4038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:17.269243 containerd[1601]: 2026-01-14 13:41:17.188 [INFO][4038] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" host="localhost" Jan 14 13:41:17.269492 containerd[1601]: 2026-01-14 13:41:17.190 [INFO][4038] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849 Jan 14 13:41:17.269492 containerd[1601]: 2026-01-14 13:41:17.197 [INFO][4038] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" host="localhost" Jan 14 13:41:17.269492 containerd[1601]: 2026-01-14 13:41:17.206 [INFO][4038] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" host="localhost" Jan 14 13:41:17.269492 containerd[1601]: 2026-01-14 13:41:17.206 [INFO][4038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" host="localhost" Jan 14 13:41:17.269492 containerd[1601]: 2026-01-14 13:41:17.206 [INFO][4038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:17.269492 containerd[1601]: 2026-01-14 13:41:17.206 [INFO][4038] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" HandleID="k8s-pod-network.f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Workload="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.269612 containerd[1601]: 2026-01-14 13:41:17.212 [INFO][4023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0", GenerateName:"whisker-66cc84c7bd-", Namespace:"calico-system", SelfLink:"", UID:"0591f86b-0203-41aa-ad8c-78b05a83945e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66cc84c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66cc84c7bd-ghp9r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali98179252a57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:17.269612 containerd[1601]: 2026-01-14 13:41:17.212 [INFO][4023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.269850 containerd[1601]: 2026-01-14 13:41:17.212 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98179252a57 ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.269850 containerd[1601]: 2026-01-14 13:41:17.226 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.269919 containerd[1601]: 2026-01-14 13:41:17.228 [INFO][4023] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" 
WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0", GenerateName:"whisker-66cc84c7bd-", Namespace:"calico-system", SelfLink:"", UID:"0591f86b-0203-41aa-ad8c-78b05a83945e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66cc84c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849", Pod:"whisker-66cc84c7bd-ghp9r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali98179252a57", MAC:"a6:2e:e5:4b:37:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:17.270250 containerd[1601]: 2026-01-14 13:41:17.264 [INFO][4023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" Namespace="calico-system" Pod="whisker-66cc84c7bd-ghp9r" WorkloadEndpoint="localhost-k8s-whisker--66cc84c7bd--ghp9r-eth0" Jan 14 13:41:17.452078 containerd[1601]: time="2026-01-14T13:41:17.451970626Z" level=info msg="connecting to shim 
f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849" address="unix:///run/containerd/s/574ce919e27d4d243279814aaca369e4bff772b23833646b54a22aacf0c974fc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:17.511417 kubelet[2830]: E0114 13:41:17.511245 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:17.564454 systemd[1]: Started cri-containerd-f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849.scope - libcontainer container f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849. Jan 14 13:41:17.605000 audit: BPF prog-id=173 op=LOAD Jan 14 13:41:17.611000 audit: BPF prog-id=174 op=LOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.611000 audit: BPF prog-id=174 op=UNLOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.611000 audit: BPF prog-id=175 op=LOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.611000 audit: BPF prog-id=176 op=LOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.611000 audit: BPF prog-id=176 op=UNLOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.611000 audit: BPF prog-id=175 op=UNLOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.611000 audit: BPF prog-id=177 op=LOAD Jan 14 13:41:17.611000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4142 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:17.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393233333866303936333631316430666365656263363466376661 Jan 14 13:41:17.615141 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:17.708384 containerd[1601]: time="2026-01-14T13:41:17.708173508Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-66cc84c7bd-ghp9r,Uid:0591f86b-0203-41aa-ad8c-78b05a83945e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f892338f0963611d0fceebc64f7fafbb2b758bf5adba51beb90bcaa18c3d4849\"" Jan 14 13:41:17.715179 containerd[1601]: time="2026-01-14T13:41:17.715149393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:41:17.804055 containerd[1601]: time="2026-01-14T13:41:17.803865279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:17.806177 containerd[1601]: time="2026-01-14T13:41:17.806079904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:41:17.806177 containerd[1601]: time="2026-01-14T13:41:17.806144351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:17.806583 kubelet[2830]: E0114 13:41:17.806474 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:41:17.806784 kubelet[2830]: E0114 13:41:17.806589 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:41:17.812640 kubelet[2830]: E0114 13:41:17.812437 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e68290aaef264f96a3b305ab225972cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:17.815327 containerd[1601]: time="2026-01-14T13:41:17.815247637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:41:17.911119 containerd[1601]: 
time="2026-01-14T13:41:17.911025417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:17.912587 containerd[1601]: time="2026-01-14T13:41:17.912519538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:41:17.912837 containerd[1601]: time="2026-01-14T13:41:17.912598896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:17.913013 kubelet[2830]: E0114 13:41:17.912949 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:41:17.913013 kubelet[2830]: E0114 13:41:17.913002 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:41:17.913280 kubelet[2830]: E0114 13:41:17.913155 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:17.914606 kubelet[2830]: E0114 13:41:17.914459 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:41:18.250638 containerd[1601]: time="2026-01-14T13:41:18.250453854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6pz,Uid:70f39dd7-0818-4b8e-a6d8-f99942268a1b,Namespace:calico-system,Attempt:0,}" Jan 14 13:41:18.251404 containerd[1601]: time="2026-01-14T13:41:18.250488450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd75cc4bb-k94tm,Uid:4c4e216b-6b87-484b-9cae-6f0965d6396b,Namespace:calico-system,Attempt:0,}" Jan 14 13:41:18.340127 systemd-networkd[1510]: cali98179252a57: Gained IPv6LL Jan 14 13:41:18.497902 systemd-networkd[1510]: cali216b33d4066: Link UP Jan 14 13:41:18.498605 systemd-networkd[1510]: cali216b33d4066: Gained carrier Jan 14 13:41:18.521189 kubelet[2830]: E0114 13:41:18.520889 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:41:18.527755 containerd[1601]: 2026-01-14 13:41:18.306 [INFO][4229] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:41:18.527755 containerd[1601]: 2026-01-14 13:41:18.361 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5l6pz-eth0 csi-node-driver- calico-system 70f39dd7-0818-4b8e-a6d8-f99942268a1b 762 0 2026-01-14 13:40:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5l6pz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali216b33d4066 [] [] }} ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-" Jan 14 13:41:18.527755 containerd[1601]: 2026-01-14 13:41:18.361 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.527755 containerd[1601]: 2026-01-14 13:41:18.417 [INFO][4261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" HandleID="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Workload="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.418 [INFO][4261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" HandleID="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Workload="localhost-k8s-csi--node--driver--5l6pz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000798290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5l6pz", "timestamp":"2026-01-14 13:41:18.417808344 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.418 [INFO][4261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.418 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.418 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.431 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" host="localhost" Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.446 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.462 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.465 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.469 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:18.528005 containerd[1601]: 2026-01-14 13:41:18.469 [INFO][4261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" host="localhost" Jan 14 13:41:18.528247 containerd[1601]: 2026-01-14 13:41:18.471 [INFO][4261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87 Jan 14 13:41:18.528247 containerd[1601]: 2026-01-14 13:41:18.477 [INFO][4261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" host="localhost" Jan 14 13:41:18.528247 containerd[1601]: 2026-01-14 13:41:18.486 [INFO][4261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" host="localhost" Jan 14 13:41:18.528247 containerd[1601]: 2026-01-14 13:41:18.486 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" host="localhost" Jan 14 13:41:18.528247 containerd[1601]: 2026-01-14 13:41:18.486 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:18.528247 containerd[1601]: 2026-01-14 13:41:18.486 [INFO][4261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" HandleID="k8s-pod-network.d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Workload="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.528358 containerd[1601]: 2026-01-14 13:41:18.491 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5l6pz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70f39dd7-0818-4b8e-a6d8-f99942268a1b", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5l6pz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali216b33d4066", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:18.528428 containerd[1601]: 2026-01-14 13:41:18.491 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.528428 containerd[1601]: 2026-01-14 13:41:18.491 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali216b33d4066 ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.528428 containerd[1601]: 2026-01-14 13:41:18.498 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.528498 containerd[1601]: 2026-01-14 13:41:18.500 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" 
Namespace="calico-system" Pod="csi-node-driver-5l6pz" WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5l6pz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70f39dd7-0818-4b8e-a6d8-f99942268a1b", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87", Pod:"csi-node-driver-5l6pz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali216b33d4066", MAC:"9a:da:f1:e9:fc:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:18.528562 containerd[1601]: 2026-01-14 13:41:18.522 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" Namespace="calico-system" Pod="csi-node-driver-5l6pz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5l6pz-eth0" Jan 14 13:41:18.584357 containerd[1601]: time="2026-01-14T13:41:18.584256213Z" level=info msg="connecting to shim d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87" address="unix:///run/containerd/s/e0c320214497f6e153f24b0a6766a5b7c4ba3c95ff1b185ed55b228ef02b0dab" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:18.595000 audit[4302]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:18.595000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcfbe3d290 a2=0 a3=7ffcfbe3d27c items=0 ppid=2969 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:18.605000 audit[4302]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:18.605000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfbe3d290 a2=0 a3=0 items=0 ppid=2969 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:18.617305 systemd-networkd[1510]: calib448a9a7cd8: Link UP Jan 14 13:41:18.618380 systemd-networkd[1510]: calib448a9a7cd8: Gained carrier Jan 14 13:41:18.626133 systemd[1]: Started 
cri-containerd-d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87.scope - libcontainer container d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87. Jan 14 13:41:18.658108 containerd[1601]: 2026-01-14 13:41:18.304 [INFO][4231] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:41:18.658108 containerd[1601]: 2026-01-14 13:41:18.331 [INFO][4231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0 calico-kube-controllers-5cd75cc4bb- calico-system 4c4e216b-6b87-484b-9cae-6f0965d6396b 873 0 2026-01-14 13:40:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cd75cc4bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5cd75cc4bb-k94tm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib448a9a7cd8 [] [] }} ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-" Jan 14 13:41:18.658108 containerd[1601]: 2026-01-14 13:41:18.331 [INFO][4231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.658108 containerd[1601]: 2026-01-14 13:41:18.417 [INFO][4259] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" 
HandleID="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Workload="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.418 [INFO][4259] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" HandleID="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Workload="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5cd75cc4bb-k94tm", "timestamp":"2026-01-14 13:41:18.417540149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.418 [INFO][4259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.486 [INFO][4259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.486 [INFO][4259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.531 [INFO][4259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" host="localhost" Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.553 [INFO][4259] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.569 [INFO][4259] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.575 [INFO][4259] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.579 [INFO][4259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:18.658525 containerd[1601]: 2026-01-14 13:41:18.580 [INFO][4259] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" host="localhost" Jan 14 13:41:18.659635 containerd[1601]: 2026-01-14 13:41:18.583 [INFO][4259] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0 Jan 14 13:41:18.659635 containerd[1601]: 2026-01-14 13:41:18.591 [INFO][4259] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" host="localhost" Jan 14 13:41:18.659635 containerd[1601]: 2026-01-14 13:41:18.606 [INFO][4259] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" host="localhost" Jan 14 13:41:18.659635 containerd[1601]: 2026-01-14 13:41:18.606 [INFO][4259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" host="localhost" Jan 14 13:41:18.659635 containerd[1601]: 2026-01-14 13:41:18.606 [INFO][4259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:18.659635 containerd[1601]: 2026-01-14 13:41:18.606 [INFO][4259] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" HandleID="k8s-pod-network.b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Workload="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.660220 containerd[1601]: 2026-01-14 13:41:18.612 [INFO][4231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0", GenerateName:"calico-kube-controllers-5cd75cc4bb-", Namespace:"calico-system", SelfLink:"", UID:"4c4e216b-6b87-484b-9cae-6f0965d6396b", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd75cc4bb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5cd75cc4bb-k94tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib448a9a7cd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:18.660325 containerd[1601]: 2026-01-14 13:41:18.613 [INFO][4231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.660325 containerd[1601]: 2026-01-14 13:41:18.613 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib448a9a7cd8 ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.660325 containerd[1601]: 2026-01-14 13:41:18.617 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.660393 containerd[1601]: 
2026-01-14 13:41:18.622 [INFO][4231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0", GenerateName:"calico-kube-controllers-5cd75cc4bb-", Namespace:"calico-system", SelfLink:"", UID:"4c4e216b-6b87-484b-9cae-6f0965d6396b", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd75cc4bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0", Pod:"calico-kube-controllers-5cd75cc4bb-k94tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib448a9a7cd8", MAC:"32:7d:d8:90:04:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:18.660465 containerd[1601]: 
2026-01-14 13:41:18.653 [INFO][4231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" Namespace="calico-system" Pod="calico-kube-controllers-5cd75cc4bb-k94tm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cd75cc4bb--k94tm-eth0" Jan 14 13:41:18.695000 audit: BPF prog-id=178 op=LOAD Jan 14 13:41:18.697000 audit: BPF prog-id=179 op=LOAD Jan 14 13:41:18.697000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.698000 audit: BPF prog-id=179 op=UNLOAD Jan 14 13:41:18.698000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.699000 audit: BPF prog-id=180 op=LOAD Jan 14 13:41:18.699000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.699000 audit: BPF prog-id=181 op=LOAD Jan 14 13:41:18.699000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.699000 audit: BPF prog-id=181 op=UNLOAD Jan 14 13:41:18.699000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.700000 audit: BPF prog-id=180 op=UNLOAD Jan 14 13:41:18.700000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.700000 audit: BPF prog-id=182 op=LOAD Jan 14 13:41:18.700000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4293 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437326135393232346362626432646438333534643161333034623662 Jan 14 13:41:18.704069 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:18.719786 containerd[1601]: time="2026-01-14T13:41:18.719619240Z" level=info msg="connecting to shim b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0" address="unix:///run/containerd/s/a110ab6bbcd827abe9e4b45f44007e4bb0f3563fbe0834c05911e9e084bb8ce1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:18.784993 systemd[1]: Started cri-containerd-b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0.scope - libcontainer container b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0. 
Jan 14 13:41:18.801443 containerd[1601]: time="2026-01-14T13:41:18.801270533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6pz,Uid:70f39dd7-0818-4b8e-a6d8-f99942268a1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d72a59224cbbd2dd8354d1a304b6bc9d7f236eca33788e05fb95d9a1e98ccf87\"" Jan 14 13:41:18.809379 containerd[1601]: time="2026-01-14T13:41:18.809289456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:41:18.843000 audit: BPF prog-id=183 op=LOAD Jan 14 13:41:18.845000 audit: BPF prog-id=184 op=LOAD Jan 14 13:41:18.845000 audit[4365]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.848000 audit: BPF prog-id=184 op=UNLOAD Jan 14 13:41:18.848000 audit[4365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.849000 audit: BPF prog-id=185 op=LOAD Jan 14 13:41:18.849000 audit[4365]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.852000 audit: BPF prog-id=186 op=LOAD Jan 14 13:41:18.852000 audit[4365]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.855000 audit: BPF prog-id=186 op=UNLOAD Jan 14 13:41:18.855000 audit[4365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.855000 audit: BPF prog-id=185 op=UNLOAD Jan 14 
13:41:18.855000 audit[4365]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.856000 audit: BPF prog-id=187 op=LOAD Jan 14 13:41:18.856000 audit[4365]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4353 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:18.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383533643237323264363163363638393064386635643838623830 Jan 14 13:41:18.863494 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:18.915463 containerd[1601]: time="2026-01-14T13:41:18.915367779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd75cc4bb-k94tm,Uid:4c4e216b-6b87-484b-9cae-6f0965d6396b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4853d2722d61c66890d8f5d88b8010f72f0743fa113af5a0ffd8030ce1111a0\"" Jan 14 13:41:18.927252 containerd[1601]: time="2026-01-14T13:41:18.927190502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:18.928761 containerd[1601]: 
time="2026-01-14T13:41:18.928623446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:41:18.928948 containerd[1601]: time="2026-01-14T13:41:18.928784018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:18.929421 kubelet[2830]: E0114 13:41:18.929271 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:41:18.929421 kubelet[2830]: E0114 13:41:18.929334 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:41:18.929741 kubelet[2830]: E0114 13:41:18.929572 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 13:41:18.930028 containerd[1601]: time="2026-01-14T13:41:18.929924865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:41:19.038174 containerd[1601]: time="2026-01-14T13:41:19.037923504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:19.039945 containerd[1601]: time="2026-01-14T13:41:19.039801549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:41:19.040319 kubelet[2830]: E0114 13:41:19.040243 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:41:19.040387 kubelet[2830]: E0114 13:41:19.040318 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:41:19.040585 kubelet[2830]: E0114 13:41:19.040527 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdt9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cd75cc4bb-k94tm_calico-system(4c4e216b-6b87-484b-9cae-6f0965d6396b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:19.042899 kubelet[2830]: E0114 13:41:19.042797 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:41:19.066321 containerd[1601]: time="2026-01-14T13:41:19.039835834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:19.066321 containerd[1601]: time="2026-01-14T13:41:19.041293109Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:41:19.184344 containerd[1601]: time="2026-01-14T13:41:19.183892278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:19.185895 containerd[1601]: time="2026-01-14T13:41:19.185804004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:41:19.186138 containerd[1601]: time="2026-01-14T13:41:19.185908533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:19.186745 kubelet[2830]: E0114 13:41:19.186551 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:41:19.186843 kubelet[2830]: E0114 13:41:19.186660 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:41:19.187291 kubelet[2830]: E0114 13:41:19.187152 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:19.189286 kubelet[2830]: E0114 13:41:19.189202 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:19.478539 kubelet[2830]: I0114 13:41:19.478422 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:41:19.480092 kubelet[2830]: E0114 13:41:19.479099 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:19.525101 kubelet[2830]: E0114 13:41:19.524502 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:41:19.524000 audit[4412]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4412 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:19.524000 audit[4412]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe8953ce00 a2=0 a3=7ffe8953cdec items=0 ppid=2969 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:19.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:19.529039 kubelet[2830]: E0114 13:41:19.528443 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:41:19.529615 kubelet[2830]: E0114 13:41:19.529588 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:19.531651 kubelet[2830]: E0114 13:41:19.531531 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:19.531000 audit[4412]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:19.531000 audit[4412]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe8953ce00 a2=0 a3=7ffe8953cdec items=0 ppid=2969 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:19.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:19.938293 systemd-networkd[1510]: cali216b33d4066: Gained IPv6LL Jan 14 13:41:20.260591 systemd-networkd[1510]: calib448a9a7cd8: Gained IPv6LL Jan 14 13:41:20.267027 kubelet[2830]: E0114 13:41:20.265215 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:20.267152 containerd[1601]: time="2026-01-14T13:41:20.266585437Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7d8b4854-7h7cg,Uid:8d93874e-aa0f-4caa-9b42-ab659ab91c41,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:41:20.267609 containerd[1601]: time="2026-01-14T13:41:20.267453397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h76mq,Uid:a23cfe8e-2547-4e36-9f35-c3f56eeb09a7,Namespace:kube-system,Attempt:0,}" Jan 14 13:41:20.267884 kubelet[2830]: E0114 13:41:20.267791 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:20.268916 containerd[1601]: time="2026-01-14T13:41:20.268084535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtrzq,Uid:b2d973ed-942e-4bef-9dc2-ff5f578f60a0,Namespace:kube-system,Attempt:0,}" Jan 14 13:41:20.525000 audit: BPF prog-id=188 op=LOAD Jan 14 13:41:20.525000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8478bf40 a2=98 a3=1fffffffffffffff items=0 ppid=4432 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.525000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:41:20.525000 audit: BPF prog-id=188 op=UNLOAD Jan 14 13:41:20.525000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe8478bf10 a3=0 items=0 ppid=4432 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.525000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:41:20.526000 audit: BPF prog-id=189 op=LOAD Jan 14 13:41:20.526000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8478be20 a2=94 a3=3 items=0 ppid=4432 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:41:20.526000 audit: BPF prog-id=189 op=UNLOAD Jan 14 13:41:20.526000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8478be20 a2=94 a3=3 items=0 ppid=4432 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:41:20.526000 audit: BPF prog-id=190 op=LOAD Jan 14 13:41:20.526000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8478be60 a2=94 a3=7ffe8478c040 items=0 ppid=4432 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.526000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:41:20.527000 audit: BPF prog-id=190 op=UNLOAD Jan 14 13:41:20.527000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8478be60 a2=94 a3=7ffe8478c040 items=0 ppid=4432 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.527000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:41:20.531966 kubelet[2830]: E0114 13:41:20.531864 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:41:20.548861 kubelet[2830]: E0114 13:41:20.548063 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:20.569000 audit: BPF prog-id=191 op=LOAD Jan 14 13:41:20.569000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd22875630 a2=98 a3=3 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.569000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:20.570000 audit: BPF prog-id=191 op=UNLOAD Jan 14 13:41:20.570000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd22875600 a3=0 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:20.570000 audit: BPF prog-id=192 op=LOAD Jan 14 13:41:20.570000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd22875420 a2=94 a3=54428f items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.570000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:20.570000 audit: BPF prog-id=192 op=UNLOAD Jan 14 13:41:20.570000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd22875420 a2=94 a3=54428f items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:20.570000 audit: BPF prog-id=193 op=LOAD Jan 14 13:41:20.570000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd22875450 a2=94 a3=2 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:20.570000 audit: BPF prog-id=193 op=UNLOAD Jan 14 13:41:20.570000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd22875450 a2=0 a3=2 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.570000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:20.648537 systemd-networkd[1510]: caliebfd9d475a1: Link UP Jan 14 13:41:20.649989 systemd-networkd[1510]: caliebfd9d475a1: Gained carrier Jan 14 13:41:20.669115 containerd[1601]: 2026-01-14 13:41:20.393 [INFO][4464] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:41:20.669115 containerd[1601]: 2026-01-14 13:41:20.413 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--h76mq-eth0 coredns-668d6bf9bc- kube-system a23cfe8e-2547-4e36-9f35-c3f56eeb09a7 879 0 2026-01-14 13:40:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-h76mq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebfd9d475a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-" Jan 14 13:41:20.669115 containerd[1601]: 2026-01-14 13:41:20.413 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.669115 containerd[1601]: 2026-01-14 13:41:20.531 [INFO][4512] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" HandleID="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Workload="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.531 [INFO][4512] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" HandleID="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Workload="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f900), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-h76mq", "timestamp":"2026-01-14 
13:41:20.531296467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.531 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.531 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.531 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.586 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" host="localhost" Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.611 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.618 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.620 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.623 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:20.669405 containerd[1601]: 2026-01-14 13:41:20.624 [INFO][4512] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" host="localhost" Jan 14 13:41:20.669805 containerd[1601]: 2026-01-14 13:41:20.626 [INFO][4512] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d Jan 14 13:41:20.669805 containerd[1601]: 2026-01-14 13:41:20.630 [INFO][4512] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" host="localhost" Jan 14 13:41:20.669805 containerd[1601]: 2026-01-14 13:41:20.641 [INFO][4512] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" host="localhost" Jan 14 13:41:20.669805 containerd[1601]: 2026-01-14 13:41:20.642 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" host="localhost" Jan 14 13:41:20.669805 containerd[1601]: 2026-01-14 13:41:20.642 [INFO][4512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 13:41:20.669805 containerd[1601]: 2026-01-14 13:41:20.642 [INFO][4512] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" HandleID="k8s-pod-network.8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Workload="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.669935 containerd[1601]: 2026-01-14 13:41:20.645 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h76mq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a23cfe8e-2547-4e36-9f35-c3f56eeb09a7", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-h76mq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebfd9d475a1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:20.670023 containerd[1601]: 2026-01-14 13:41:20.645 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.670023 containerd[1601]: 2026-01-14 13:41:20.645 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebfd9d475a1 ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.670023 containerd[1601]: 2026-01-14 13:41:20.648 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.670100 containerd[1601]: 2026-01-14 13:41:20.649 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h76mq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a23cfe8e-2547-4e36-9f35-c3f56eeb09a7", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d", Pod:"coredns-668d6bf9bc-h76mq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebfd9d475a1", MAC:"8e:45:50:5a:8f:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:20.670100 containerd[1601]: 2026-01-14 13:41:20.663 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-h76mq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h76mq-eth0" Jan 14 13:41:20.722416 containerd[1601]: time="2026-01-14T13:41:20.722211053Z" level=info msg="connecting to shim 8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d" address="unix:///run/containerd/s/e251c7d3938a8a610a57dab180fb99f394725c2fb3ee662b2d9a7753000daef7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:20.781140 systemd[1]: Started cri-containerd-8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d.scope - libcontainer container 8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d. Jan 14 13:41:20.790637 systemd-networkd[1510]: califc0b9d74573: Link UP Jan 14 13:41:20.794017 systemd-networkd[1510]: califc0b9d74573: Gained carrier Jan 14 13:41:20.820000 audit: BPF prog-id=194 op=LOAD Jan 14 13:41:20.827340 kernel: kauditd_printk_skb: 119 callbacks suppressed Jan 14 13:41:20.827540 kernel: audit: type=1334 audit(1768398080.820:604): prog-id=194 op=LOAD Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.484 [INFO][4478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0 calico-apiserver-7d8b4854- calico-apiserver 8d93874e-aa0f-4caa-9b42-ab659ab91c41 882 0 2026-01-14 13:40:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d8b4854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d8b4854-7h7cg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc0b9d74573 [] [] }} ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.484 [INFO][4478] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.601 [INFO][4524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" HandleID="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Workload="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.602 [INFO][4524] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" HandleID="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Workload="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d8b4854-7h7cg", "timestamp":"2026-01-14 13:41:20.60162796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.602 [INFO][4524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.642 [INFO][4524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.642 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.688 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.703 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.725 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.729 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.737 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.737 [INFO][4524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.752 [INFO][4524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218 Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.758 [INFO][4524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.773 [INFO][4524] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.773 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" host="localhost" Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.777 [INFO][4524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:20.827576 containerd[1601]: 2026-01-14 13:41:20.777 [INFO][4524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" HandleID="k8s-pod-network.1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Workload="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.828431 containerd[1601]: 2026-01-14 13:41:20.784 [INFO][4478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0", GenerateName:"calico-apiserver-7d8b4854-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d93874e-aa0f-4caa-9b42-ab659ab91c41", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b4854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d8b4854-7h7cg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc0b9d74573", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:20.828431 containerd[1601]: 2026-01-14 13:41:20.785 [INFO][4478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.828431 containerd[1601]: 2026-01-14 13:41:20.785 [INFO][4478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc0b9d74573 ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.828431 containerd[1601]: 2026-01-14 13:41:20.796 [INFO][4478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.828431 containerd[1601]: 2026-01-14 13:41:20.797 [INFO][4478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0", GenerateName:"calico-apiserver-7d8b4854-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d93874e-aa0f-4caa-9b42-ab659ab91c41", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b4854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218", Pod:"calico-apiserver-7d8b4854-7h7cg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc0b9d74573", MAC:"aa:c4:41:d0:42:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:20.828431 containerd[1601]: 2026-01-14 13:41:20.815 [INFO][4478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-7h7cg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--7h7cg-eth0" Jan 14 13:41:20.820000 audit: BPF prog-id=195 op=LOAD Jan 14 13:41:20.820000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.833495 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:20.858532 kernel: audit: type=1334 audit(1768398080.820:605): prog-id=195 op=LOAD Jan 14 13:41:20.858616 kernel: audit: type=1300 audit(1768398080.820:605): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.820000 audit: BPF prog-id=195 op=UNLOAD Jan 14 13:41:20.881744 kernel: audit: type=1327 audit(1768398080.820:605): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.881829 kernel: audit: type=1334 audit(1768398080.820:606): prog-id=195 op=UNLOAD Jan 14 
13:41:20.881849 kernel: audit: type=1300 audit(1768398080.820:606): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.820000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.913489 kernel: audit: type=1327 audit(1768398080.820:606): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.821000 audit: BPF prog-id=196 op=LOAD Jan 14 13:41:20.915050 systemd-networkd[1510]: calid6303e95eb7: Link UP Jan 14 13:41:20.918278 systemd-networkd[1510]: calid6303e95eb7: Gained carrier Jan 14 13:41:20.937758 kernel: audit: type=1334 audit(1768398080.821:607): prog-id=196 op=LOAD Jan 14 13:41:20.937902 kernel: audit: type=1300 audit(1768398080.821:607): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.821000 audit[4577]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.499 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0 coredns-668d6bf9bc- kube-system b2d973ed-942e-4bef-9dc2-ff5f578f60a0 880 0 2026-01-14 13:40:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xtrzq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid6303e95eb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.499 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.614 [INFO][4534] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" HandleID="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Workload="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.615 [INFO][4534] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" HandleID="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Workload="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003102d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xtrzq", "timestamp":"2026-01-14 13:41:20.61483588 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.615 [INFO][4534] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.773 [INFO][4534] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.774 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.800 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.817 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.851 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.856 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.863 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.864 [INFO][4534] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.868 [INFO][4534] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.878 [INFO][4534] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.890 [INFO][4534] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.891 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" host="localhost" Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.892 [INFO][4534] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:20.950591 containerd[1601]: 2026-01-14 13:41:20.893 [INFO][4534] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" HandleID="k8s-pod-network.8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Workload="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.951544 kernel: audit: type=1327 audit(1768398080.821:607): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.951596 containerd[1601]: 2026-01-14 13:41:20.901 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2d973ed-942e-4bef-9dc2-ff5f578f60a0", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 36, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xtrzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid6303e95eb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:20.951596 containerd[1601]: 2026-01-14 13:41:20.902 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.951596 containerd[1601]: 2026-01-14 13:41:20.902 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6303e95eb7 ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.951596 containerd[1601]: 2026-01-14 13:41:20.919 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.951596 containerd[1601]: 2026-01-14 13:41:20.919 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2d973ed-942e-4bef-9dc2-ff5f578f60a0", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a", Pod:"coredns-668d6bf9bc-xtrzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calid6303e95eb7", MAC:"3a:03:85:3a:87:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:20.951596 containerd[1601]: 2026-01-14 13:41:20.938 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtrzq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xtrzq-eth0" Jan 14 13:41:20.821000 audit: BPF prog-id=197 op=LOAD Jan 14 13:41:20.821000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.821000 audit: BPF prog-id=197 op=UNLOAD Jan 14 13:41:20.821000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:41:20.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.821000 audit: BPF prog-id=196 op=UNLOAD Jan 14 13:41:20.821000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.821000 audit: BPF prog-id=198 op=LOAD Jan 14 13:41:20.821000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4565 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:20.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356338383939386463353966333932383938393364386134386662 Jan 14 13:41:20.993781 containerd[1601]: time="2026-01-14T13:41:20.993045526Z" level=info msg="connecting to shim 1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218" address="unix:///run/containerd/s/4a312953d87492a5b7c6ffc57f300d38d45ae062592029a494ccdc14bf7e2e59" namespace=k8s.io protocol=ttrpc version=3 
Jan 14 13:41:21.020457 containerd[1601]: time="2026-01-14T13:41:21.020329379Z" level=info msg="connecting to shim 8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a" address="unix:///run/containerd/s/fa311f21ae4515803bc5220d46ba04feb2a1da59181139c53ca4b25a2e9de980" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:21.035429 containerd[1601]: time="2026-01-14T13:41:21.035169425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h76mq,Uid:a23cfe8e-2547-4e36-9f35-c3f56eeb09a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d\"" Jan 14 13:41:21.045976 kubelet[2830]: E0114 13:41:21.045891 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:21.066584 systemd[1]: Started cri-containerd-1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218.scope - libcontainer container 1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218. 
Jan 14 13:41:21.070818 containerd[1601]: time="2026-01-14T13:41:21.070340541Z" level=info msg="CreateContainer within sandbox \"8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:41:21.085000 audit: BPF prog-id=199 op=LOAD Jan 14 13:41:21.085000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd22875310 a2=94 a3=1 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.086000 audit: BPF prog-id=199 op=UNLOAD Jan 14 13:41:21.086000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd22875310 a2=94 a3=1 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.086000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.110000 audit: BPF prog-id=200 op=LOAD Jan 14 13:41:21.110000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd22875300 a2=94 a3=4 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.110000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.110000 audit: BPF prog-id=200 op=UNLOAD Jan 14 13:41:21.110000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd22875300 a2=0 a3=4 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.110000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.111000 audit: BPF prog-id=201 op=LOAD Jan 14 13:41:21.111000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd22875160 a2=94 a3=5 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.111000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.111000 audit: BPF prog-id=201 op=UNLOAD Jan 14 13:41:21.111000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd22875160 a2=0 a3=5 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.111000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.111000 audit: BPF prog-id=202 op=LOAD Jan 14 13:41:21.111000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd22875380 a2=94 a3=6 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.111000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.111000 audit: BPF prog-id=202 op=UNLOAD Jan 14 13:41:21.111000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd22875380 a2=0 a3=6 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:41:21.111000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.112000 audit: BPF prog-id=203 op=LOAD Jan 14 13:41:21.112000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd22874b30 a2=94 a3=88 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.112000 audit: BPF prog-id=204 op=LOAD Jan 14 13:41:21.112000 audit[4545]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd228749b0 a2=94 a3=2 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.112000 audit: BPF prog-id=204 op=UNLOAD Jan 14 13:41:21.112000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd228749e0 a2=0 a3=7ffd22874ae0 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.118000 audit: BPF prog-id=203 op=UNLOAD Jan 14 13:41:21.118000 audit[4545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2f4eed10 a2=0 a3=b9ec128cb30bd6f1 items=0 ppid=4432 pid=4545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.118000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:41:21.119862 containerd[1601]: time="2026-01-14T13:41:21.119432627Z" level=info msg="Container 85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:41:21.130567 containerd[1601]: time="2026-01-14T13:41:21.130451202Z" level=info msg="CreateContainer within sandbox \"8a5c88998dc59f39289893d8a48fb93ca1366315f72b5bd971762b02a7301a7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c\"" Jan 14 13:41:21.131772 containerd[1601]: time="2026-01-14T13:41:21.131642677Z" level=info msg="StartContainer for \"85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c\"" Jan 14 13:41:21.138500 containerd[1601]: time="2026-01-14T13:41:21.138394443Z" level=info msg="connecting to shim 85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c" address="unix:///run/containerd/s/e251c7d3938a8a610a57dab180fb99f394725c2fb3ee662b2d9a7753000daef7" protocol=ttrpc version=3 Jan 14 13:41:21.144281 systemd[1]: Started cri-containerd-8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a.scope - libcontainer container 8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a. 
Jan 14 13:41:21.158000 audit: BPF prog-id=205 op=LOAD Jan 14 13:41:21.159000 audit: BPF prog-id=206 op=LOAD Jan 14 13:41:21.159000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.159000 audit: BPF prog-id=206 op=UNLOAD Jan 14 13:41:21.159000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.159000 audit: BPF prog-id=207 op=LOAD Jan 14 13:41:21.159000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.159000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.162000 audit: BPF prog-id=208 op=LOAD Jan 14 13:41:21.162000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.163000 audit: BPF prog-id=209 op=LOAD Jan 14 13:41:21.163000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0361d720 a2=98 a3=1999999999999999 items=0 ppid=4432 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:41:21.163000 audit: BPF prog-id=209 op=UNLOAD Jan 14 13:41:21.163000 audit[4702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0361d6f0 a3=0 items=0 ppid=4432 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:41:21.163000 audit: BPF prog-id=210 op=LOAD Jan 14 13:41:21.163000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0361d600 a2=94 a3=ffff items=0 ppid=4432 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:41:21.163000 audit: BPF prog-id=210 op=UNLOAD Jan 14 13:41:21.163000 audit[4702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0361d600 a2=94 a3=ffff items=0 ppid=4432 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:41:21.163000 audit: BPF prog-id=211 op=LOAD Jan 14 13:41:21.163000 audit[4702]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0361d640 a2=94 a3=7ffd0361d820 items=0 ppid=4432 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:41:21.163000 audit: BPF prog-id=211 op=UNLOAD Jan 14 13:41:21.163000 audit[4702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0361d640 a2=94 a3=7ffd0361d820 items=0 ppid=4432 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.163000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:41:21.165000 audit: BPF prog-id=208 op=UNLOAD Jan 14 13:41:21.165000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.166000 audit: BPF prog-id=207 op=UNLOAD Jan 14 13:41:21.166000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.168000 audit: BPF prog-id=212 op=LOAD Jan 14 13:41:21.168000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=4618 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164386438643564623532373139376433613036623933343566633934 Jan 14 13:41:21.170000 audit: BPF prog-id=213 op=LOAD Jan 14 13:41:21.171000 audit: BPF prog-id=214 op=LOAD Jan 14 13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 13:41:21.171000 audit: BPF prog-id=214 op=UNLOAD Jan 14 
13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 13:41:21.171000 audit: BPF prog-id=215 op=LOAD Jan 14 13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 13:41:21.171000 audit: BPF prog-id=216 op=LOAD Jan 14 13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 
13:41:21.171000 audit: BPF prog-id=216 op=UNLOAD Jan 14 13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 13:41:21.171000 audit: BPF prog-id=215 op=UNLOAD Jan 14 13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 13:41:21.171000 audit: BPF prog-id=217 op=LOAD Jan 14 13:41:21.171000 audit[4667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4643 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.171000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616162333238656536626436383434313230393339346665336161 Jan 14 13:41:21.173607 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:21.188624 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:21.200257 systemd[1]: Started cri-containerd-85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c.scope - libcontainer container 85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c. Jan 14 13:41:21.267345 containerd[1601]: time="2026-01-14T13:41:21.267127343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26r2w,Uid:d3d1f008-c373-460c-bb65-2604d6d39838,Namespace:calico-system,Attempt:0,}" Jan 14 13:41:21.269578 containerd[1601]: time="2026-01-14T13:41:21.269223740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-wgpjs,Uid:cf16b57f-d6ab-45f8-acf0-0156a11bd169,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:41:21.278000 audit: BPF prog-id=218 op=LOAD Jan 14 13:41:21.288000 audit: BPF prog-id=219 op=LOAD Jan 14 13:41:21.288000 audit[4686]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.291000 audit: BPF prog-id=219 op=UNLOAD Jan 14 13:41:21.291000 audit[4686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.293000 audit: BPF prog-id=220 op=LOAD Jan 14 13:41:21.293000 audit[4686]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.295000 audit: BPF prog-id=221 op=LOAD Jan 14 13:41:21.295000 audit[4686]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:41:21.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.295000 audit: BPF prog-id=221 op=UNLOAD Jan 14 13:41:21.295000 audit[4686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.295000 audit: BPF prog-id=220 op=UNLOAD Jan 14 13:41:21.295000 audit[4686]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.299000 audit: BPF prog-id=222 op=LOAD Jan 14 13:41:21.299000 audit[4686]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4565 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835643037373132613665613736613632663134633265653535613436 Jan 14 13:41:21.307764 containerd[1601]: time="2026-01-14T13:41:21.307515701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtrzq,Uid:b2d973ed-942e-4bef-9dc2-ff5f578f60a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a\"" Jan 14 13:41:21.316230 kubelet[2830]: E0114 13:41:21.315652 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:21.326500 containerd[1601]: time="2026-01-14T13:41:21.326125237Z" level=info msg="CreateContainer within sandbox \"8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:41:21.432181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172633159.mount: Deactivated successfully. 
Jan 14 13:41:21.464829 containerd[1601]: time="2026-01-14T13:41:21.464653561Z" level=info msg="StartContainer for \"85d07712a6ea76a62f14c2ee55a46c42d5ac5c96443345006aa190f3731ecc2c\" returns successfully" Jan 14 13:41:21.469111 containerd[1601]: time="2026-01-14T13:41:21.466249811Z" level=info msg="Container 6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:41:21.488104 containerd[1601]: time="2026-01-14T13:41:21.487995121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-7h7cg,Uid:8d93874e-aa0f-4caa-9b42-ab659ab91c41,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1d8d8d5db527197d3a06b9345fc947577f5aa7c5ea360bfa6e7cda6263862218\"" Jan 14 13:41:21.492229 containerd[1601]: time="2026-01-14T13:41:21.491972226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:41:21.494473 containerd[1601]: time="2026-01-14T13:41:21.494435005Z" level=info msg="CreateContainer within sandbox \"8baab328ee6bd68441209394fe3aa61c8967db9df73882efad4bfea914e6732a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e\"" Jan 14 13:41:21.495316 containerd[1601]: time="2026-01-14T13:41:21.495285713Z" level=info msg="StartContainer for \"6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e\"" Jan 14 13:41:21.500434 containerd[1601]: time="2026-01-14T13:41:21.500400282Z" level=info msg="connecting to shim 6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e" address="unix:///run/containerd/s/fa311f21ae4515803bc5220d46ba04feb2a1da59181139c53ca4b25a2e9de980" protocol=ttrpc version=3 Jan 14 13:41:21.585401 systemd-networkd[1510]: vxlan.calico: Link UP Jan 14 13:41:21.585438 systemd-networkd[1510]: vxlan.calico: Gained carrier Jan 14 13:41:21.602344 systemd[1]: Started 
cri-containerd-6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e.scope - libcontainer container 6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e. Jan 14 13:41:21.677000 audit: BPF prog-id=223 op=LOAD Jan 14 13:41:21.678000 audit: BPF prog-id=224 op=LOAD Jan 14 13:41:21.678000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.679000 audit: BPF prog-id=224 op=UNLOAD Jan 14 13:41:21.679000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.680000 audit: BPF prog-id=225 op=LOAD Jan 14 13:41:21.680000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.680000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.681000 audit: BPF prog-id=226 op=LOAD Jan 14 13:41:21.681000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.682000 audit: BPF prog-id=226 op=UNLOAD Jan 14 13:41:21.682000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.682000 audit: BPF prog-id=225 op=UNLOAD Jan 14 13:41:21.682000 audit[4795]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:41:21.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.682000 audit: BPF prog-id=227 op=LOAD Jan 14 13:41:21.682000 audit[4795]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4643 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663653864376161623763626130316263343566626563306366633032 Jan 14 13:41:21.708479 kubelet[2830]: E0114 13:41:21.708356 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:21.756000 audit: BPF prog-id=228 op=LOAD Jan 14 13:41:21.756000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef5e2a8a0 a2=98 a3=0 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.756000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.756000 audit: BPF prog-id=228 op=UNLOAD Jan 14 13:41:21.756000 audit[4845]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffef5e2a870 a3=0 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.756000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.757000 audit: BPF prog-id=229 op=LOAD Jan 14 13:41:21.757000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef5e2a6b0 a2=94 a3=54428f items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.757000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.757000 audit: BPF prog-id=229 op=UNLOAD Jan 14 13:41:21.757000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffef5e2a6b0 a2=94 a3=54428f items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.757000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.757000 audit: BPF prog-id=230 op=LOAD Jan 14 13:41:21.757000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef5e2a6e0 a2=94 a3=2 
items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.757000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.757000 audit: BPF prog-id=230 op=UNLOAD Jan 14 13:41:21.757000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffef5e2a6e0 a2=0 a3=2 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.757000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.757000 audit: BPF prog-id=231 op=LOAD Jan 14 13:41:21.757000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef5e2a490 a2=94 a3=4 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.757000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.759000 audit: BPF prog-id=231 op=UNLOAD Jan 14 13:41:21.759000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffef5e2a490 a2=94 a3=4 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.759000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.759000 audit: BPF prog-id=232 op=LOAD Jan 14 13:41:21.759000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef5e2a590 a2=94 a3=7ffef5e2a710 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.759000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.759000 audit: BPF prog-id=232 op=UNLOAD Jan 14 13:41:21.759000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffef5e2a590 a2=0 a3=7ffef5e2a710 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.759000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.764000 audit: BPF prog-id=233 op=LOAD Jan 14 13:41:21.764000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef5e29cc0 a2=94 a3=2 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.764000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.764000 audit: BPF prog-id=233 op=UNLOAD Jan 14 13:41:21.764000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffef5e29cc0 a2=0 a3=2 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.764000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.764000 audit: BPF prog-id=234 op=LOAD Jan 14 13:41:21.764000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef5e29dc0 a2=94 a3=30 items=0 ppid=4432 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.764000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:41:21.775000 audit: BPF prog-id=235 op=LOAD Jan 14 13:41:21.775000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb6e7a2a0 a2=98 a3=0 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:21.775000 audit: BPF prog-id=235 op=UNLOAD Jan 14 13:41:21.775000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb6e7a270 a3=0 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:21.775000 audit: BPF prog-id=236 op=LOAD Jan 14 13:41:21.775000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb6e7a090 a2=94 a3=54428f items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:21.775000 audit: BPF prog-id=236 op=UNLOAD Jan 14 13:41:21.775000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffb6e7a090 a2=94 a3=54428f items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.775000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:21.775000 audit: BPF prog-id=237 op=LOAD Jan 14 13:41:21.775000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb6e7a0c0 a2=94 a3=2 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:21.775000 audit: BPF prog-id=237 op=UNLOAD Jan 14 13:41:21.775000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffb6e7a0c0 a2=0 a3=2 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:21.778455 containerd[1601]: time="2026-01-14T13:41:21.762985872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:21.790896 containerd[1601]: time="2026-01-14T13:41:21.790586083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:41:21.790896 containerd[1601]: time="2026-01-14T13:41:21.790836131Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:21.791756 kubelet[2830]: E0114 13:41:21.791604 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:21.793409 kubelet[2830]: E0114 13:41:21.792947 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:21.796272 kubelet[2830]: E0114 13:41:21.794648 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrk54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-7h7cg_calico-apiserver(8d93874e-aa0f-4caa-9b42-ab659ab91c41): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:21.796931 kubelet[2830]: E0114 13:41:21.796778 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:41:21.812000 audit[4853]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:21.812000 audit[4853]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb28a5600 a2=0 a3=7ffdb28a55ec items=0 ppid=2969 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:21.820000 audit[4853]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:21.820000 audit[4853]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb28a5600 a2=0 a3=0 items=0 ppid=2969 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:21.820000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:21.823459 systemd-networkd[1510]: cali00cb75bfe3e: Link UP Jan 14 13:41:21.826748 systemd-networkd[1510]: cali00cb75bfe3e: Gained carrier Jan 14 13:41:21.835480 containerd[1601]: time="2026-01-14T13:41:21.835417060Z" level=info msg="StartContainer for \"6ce8d7aab7cba01bc45fbec0cfc0227677b30067f03595a94b6190406399ed0e\" returns successfully" Jan 14 13:41:21.897084 kubelet[2830]: I0114 13:41:21.897013 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h76mq" podStartSLOduration=45.896987006 podStartE2EDuration="45.896987006s" podCreationTimestamp="2026-01-14 13:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:41:21.772041862 +0000 UTC m=+48.832612299" watchObservedRunningTime="2026-01-14 13:41:21.896987006 +0000 UTC m=+48.957557463" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.480 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0 calico-apiserver-7d8b4854- calico-apiserver cf16b57f-d6ab-45f8-acf0-0156a11bd169 881 0 2026-01-14 13:40:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d8b4854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d8b4854-wgpjs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00cb75bfe3e [] [] }} ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.482 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.590 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" HandleID="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Workload="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.591 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" HandleID="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Workload="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d8b4854-wgpjs", "timestamp":"2026-01-14 13:41:21.590410998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.591 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.591 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.591 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.605 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.621 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.657 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.674 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.686 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.686 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.695 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.726 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.758 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.762 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" host="localhost" Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.763 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:21.904285 containerd[1601]: 2026-01-14 13:41:21.764 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" HandleID="k8s-pod-network.fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Workload="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:21.912826 containerd[1601]: 2026-01-14 13:41:21.791 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0", GenerateName:"calico-apiserver-7d8b4854-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf16b57f-d6ab-45f8-acf0-0156a11bd169", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b4854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d8b4854-wgpjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00cb75bfe3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:21.912826 containerd[1601]: 2026-01-14 13:41:21.792 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:21.912826 containerd[1601]: 2026-01-14 13:41:21.792 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00cb75bfe3e ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:21.912826 containerd[1601]: 2026-01-14 13:41:21.831 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:21.912826 containerd[1601]: 2026-01-14 13:41:21.837 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0", GenerateName:"calico-apiserver-7d8b4854-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf16b57f-d6ab-45f8-acf0-0156a11bd169", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b4854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b", Pod:"calico-apiserver-7d8b4854-wgpjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00cb75bfe3e", MAC:"ae:c6:c5:f6:ba:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:21.912826 containerd[1601]: 2026-01-14 13:41:21.895 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b4854-wgpjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d8b4854--wgpjs-eth0" Jan 14 13:41:22.058645 systemd-networkd[1510]: cali270d9f0f5eb: Link UP Jan 14 13:41:22.060928 systemd-networkd[1510]: cali270d9f0f5eb: Gained carrier Jan 14 13:41:22.074207 containerd[1601]: time="2026-01-14T13:41:22.074117624Z" level=info msg="connecting to shim fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b" address="unix:///run/containerd/s/f80112819818ffc2cbf3562097dec7fc668e202971efa65ea5c118dba3fde812" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.407 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--26r2w-eth0 goldmane-666569f655- calico-system d3d1f008-c373-460c-bb65-2604d6d39838 883 0 2026-01-14 13:40:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-26r2w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali270d9f0f5eb [] [] }} ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.407 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.665 [INFO][4786] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" HandleID="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Workload="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.666 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" HandleID="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Workload="localhost-k8s-goldmane--666569f655--26r2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000194f90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-26r2w", "timestamp":"2026-01-14 13:41:21.665980354 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.666 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.768 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.769 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.823 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.874 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.903 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.914 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.925 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.926 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.929 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:21.991 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:22.019 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:22.021 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" host="localhost" Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:22.021 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:41:22.095775 containerd[1601]: 2026-01-14 13:41:22.021 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" HandleID="k8s-pod-network.d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Workload="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.097179 containerd[1601]: 2026-01-14 13:41:22.032 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--26r2w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d3d1f008-c373-460c-bb65-2604d6d39838", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-26r2w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali270d9f0f5eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:22.097179 containerd[1601]: 2026-01-14 13:41:22.035 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.097179 containerd[1601]: 2026-01-14 13:41:22.035 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali270d9f0f5eb ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.097179 containerd[1601]: 2026-01-14 13:41:22.051 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.097179 containerd[1601]: 2026-01-14 13:41:22.063 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--26r2w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d3d1f008-c373-460c-bb65-2604d6d39838", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 40, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef", Pod:"goldmane-666569f655-26r2w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali270d9f0f5eb", MAC:"3a:d7:1a:5c:42:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:41:22.097179 containerd[1601]: 2026-01-14 13:41:22.089 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" Namespace="calico-system" Pod="goldmane-666569f655-26r2w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--26r2w-eth0" Jan 14 13:41:22.136592 systemd[1]: Started 
cri-containerd-fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b.scope - libcontainer container fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b. Jan 14 13:41:22.167214 containerd[1601]: time="2026-01-14T13:41:22.167046519Z" level=info msg="connecting to shim d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef" address="unix:///run/containerd/s/bc7e568d2a006e63cc998eb47fc9e3229e2c1285a4d8bbf87acac5f74466fcd2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:41:22.200000 audit: BPF prog-id=238 op=LOAD Jan 14 13:41:22.202000 audit: BPF prog-id=239 op=LOAD Jan 14 13:41:22.202000 audit[4899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.202000 audit: BPF prog-id=239 op=UNLOAD Jan 14 13:41:22.202000 audit[4899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.202000 audit: BPF prog-id=240 op=LOAD Jan 14 13:41:22.202000 
audit[4899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.202000 audit: BPF prog-id=241 op=LOAD Jan 14 13:41:22.202000 audit[4899]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.203000 audit: BPF prog-id=241 op=UNLOAD Jan 14 13:41:22.203000 audit[4899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.203000 audit: BPF 
prog-id=240 op=UNLOAD Jan 14 13:41:22.203000 audit[4899]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.203000 audit: BPF prog-id=242 op=LOAD Jan 14 13:41:22.203000 audit[4899]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4887 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663393531303961666539303730393834393064376634636536313366 Jan 14 13:41:22.207845 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:22.236198 systemd[1]: Started cri-containerd-d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef.scope - libcontainer container d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef. Jan 14 13:41:22.290739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338530757.mount: Deactivated successfully. 
Jan 14 13:41:22.307000 audit: BPF prog-id=243 op=LOAD Jan 14 13:41:22.309000 audit: BPF prog-id=244 op=LOAD Jan 14 13:41:22.309000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.310000 audit: BPF prog-id=244 op=UNLOAD Jan 14 13:41:22.310000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.312000 audit: BPF prog-id=245 op=LOAD Jan 14 13:41:22.312000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.312000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.314000 audit: BPF prog-id=246 op=LOAD Jan 14 13:41:22.314000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.316000 audit: BPF prog-id=246 op=UNLOAD Jan 14 13:41:22.316000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.316000 audit: BPF prog-id=245 op=UNLOAD Jan 14 13:41:22.316000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:41:22.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.317000 audit: BPF prog-id=247 op=LOAD Jan 14 13:41:22.317000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4926 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437356138636439636366313866323631306330373663336330316631 Jan 14 13:41:22.321935 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:41:22.329000 audit: BPF prog-id=248 op=LOAD Jan 14 13:41:22.329000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffb6e79f80 a2=94 a3=1 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.329000 audit: BPF prog-id=248 op=UNLOAD Jan 14 13:41:22.329000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffb6e79f80 a2=94 a3=1 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.329000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.344000 audit: BPF prog-id=249 op=LOAD Jan 14 13:41:22.344000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffb6e79f70 a2=94 a3=4 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.346000 audit: BPF prog-id=249 op=UNLOAD Jan 14 13:41:22.346000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffb6e79f70 a2=0 a3=4 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.346000 audit: BPF prog-id=250 op=LOAD Jan 14 13:41:22.346000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffb6e79dd0 a2=94 a3=5 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.346000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.346000 audit: BPF prog-id=250 op=UNLOAD Jan 14 13:41:22.346000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffb6e79dd0 a2=0 a3=5 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.346000 audit: BPF prog-id=251 op=LOAD Jan 14 13:41:22.346000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffb6e79ff0 a2=94 a3=6 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.347000 audit: BPF prog-id=251 op=UNLOAD Jan 14 13:41:22.347000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffb6e79ff0 a2=0 a3=6 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.347000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.347000 audit: BPF prog-id=252 op=LOAD Jan 14 13:41:22.347000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffb6e797a0 a2=94 a3=88 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.348000 audit: BPF prog-id=253 op=LOAD Jan 14 13:41:22.348000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffb6e79620 a2=94 a3=2 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.348000 audit: BPF prog-id=253 op=UNLOAD Jan 14 13:41:22.348000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffb6e79650 a2=0 a3=7fffb6e79750 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.348000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.349000 audit: BPF prog-id=252 op=UNLOAD Jan 14 13:41:22.349000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=31502d10 a2=0 a3=30648159a9d9738 items=0 ppid=4432 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:41:22.367000 audit: BPF prog-id=234 op=UNLOAD Jan 14 13:41:22.367000 audit[4432]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000e484c0 a2=0 a3=0 items=0 ppid=4061 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.367000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 13:41:22.387254 containerd[1601]: time="2026-01-14T13:41:22.386945848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b4854-wgpjs,Uid:cf16b57f-d6ab-45f8-acf0-0156a11bd169,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fc95109afe907098490d7f4ce613f987d6aaf9d721778be8d17713b229e9243b\"" Jan 14 13:41:22.394242 containerd[1601]: time="2026-01-14T13:41:22.394194104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:41:22.408974 containerd[1601]: time="2026-01-14T13:41:22.408906131Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-26r2w,Uid:d3d1f008-c373-460c-bb65-2604d6d39838,Namespace:calico-system,Attempt:0,} returns sandbox id \"d75a8cd9ccf18f2610c076c3c01f171c82b9d6c3e8511bf2480e384844f852ef\"" Jan 14 13:41:22.457551 containerd[1601]: time="2026-01-14T13:41:22.457315783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:22.462011 containerd[1601]: time="2026-01-14T13:41:22.461883975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:22.462011 containerd[1601]: time="2026-01-14T13:41:22.461989221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:41:22.462506 kubelet[2830]: E0114 13:41:22.462439 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:22.465844 kubelet[2830]: E0114 13:41:22.463798 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:22.465844 kubelet[2830]: E0114 13:41:22.464203 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-wgpjs_calico-apiserver(cf16b57f-d6ab-45f8-acf0-0156a11bd169): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:22.466778 containerd[1601]: time="2026-01-14T13:41:22.466401257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:41:22.467074 kubelet[2830]: E0114 13:41:22.467034 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:41:22.508000 audit[5005]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=5005 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:41:22.508000 audit[5005]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffee3e88ed0 a2=0 a3=7ffee3e88ebc items=0 ppid=4432 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.508000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:41:22.512000 audit[5007]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:41:22.512000 audit[5007]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe72bf12e0 a2=0 a3=7ffe72bf12cc items=0 ppid=4432 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.512000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:41:22.520000 audit[5003]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:41:22.520000 audit[5003]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe35b5e170 a2=0 a3=7ffe35b5e15c items=0 ppid=4432 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.520000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:41:22.567331 containerd[1601]: time="2026-01-14T13:41:22.567186676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:22.523000 audit[5004]: NETFILTER_CFG table=filter:126 family=2 entries=250 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:41:22.523000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=146496 a0=3 a1=7ffc510d12f0 a2=0 a3=7ffc510d12dc items=0 ppid=4432 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.523000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:41:22.569431 containerd[1601]: time="2026-01-14T13:41:22.569198204Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:41:22.569431 containerd[1601]: time="2026-01-14T13:41:22.569363403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:22.569882 kubelet[2830]: E0114 13:41:22.569785 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:41:22.570010 kubelet[2830]: E0114 13:41:22.569899 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:41:22.570113 kubelet[2830]: E0114 13:41:22.570037 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62dg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26r2w_calico-system(d3d1f008-c373-460c-bb65-2604d6d39838): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:22.572615 kubelet[2830]: E0114 13:41:22.571525 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:41:22.611000 audit[5018]: NETFILTER_CFG table=filter:127 family=2 entries=99 op=nft_register_chain pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:41:22.611000 audit[5018]: SYSCALL arch=c000003e syscall=46 success=yes exit=52960 a0=3 a1=7ffd09f009c0 a2=0 a3=7ffd09f009ac items=0 ppid=4432 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.611000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:41:22.625995 systemd-networkd[1510]: caliebfd9d475a1: Gained IPv6LL Jan 14 13:41:22.713957 kubelet[2830]: E0114 13:41:22.713793 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:41:22.718906 kubelet[2830]: E0114 13:41:22.718786 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:41:22.723245 kubelet[2830]: E0114 13:41:22.722614 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:22.723245 kubelet[2830]: E0114 13:41:22.723121 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:22.723972 kubelet[2830]: E0114 13:41:22.723897 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:41:22.778000 audit[5020]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:22.781997 kubelet[2830]: I0114 13:41:22.781480 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xtrzq" podStartSLOduration=46.78145726 podStartE2EDuration="46.78145726s" podCreationTimestamp="2026-01-14 13:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:41:22.780862076 +0000 UTC m=+49.841432513" watchObservedRunningTime="2026-01-14 13:41:22.78145726 +0000 UTC m=+49.842027707" Jan 14 13:41:22.778000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf454f330 a2=0 a3=7ffdf454f31c items=0 ppid=2969 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:22.795000 audit[5020]: NETFILTER_CFG table=nat:129 family=2 entries=14 
op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:22.795000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf454f330 a2=0 a3=0 items=0 ppid=2969 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:22.817064 systemd-networkd[1510]: califc0b9d74573: Gained IPv6LL Jan 14 13:41:22.818026 systemd-networkd[1510]: calid6303e95eb7: Gained IPv6LL Jan 14 13:41:22.829000 audit[5022]: NETFILTER_CFG table=filter:130 family=2 entries=17 op=nft_register_rule pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:22.829000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde273c110 a2=0 a3=7ffde273c0fc items=0 ppid=2969 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:22.868000 audit[5022]: NETFILTER_CFG table=nat:131 family=2 entries=47 op=nft_register_chain pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:22.868000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffde273c110 a2=0 a3=7ffde273c0fc items=0 ppid=2969 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:22.868000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:23.142374 systemd-networkd[1510]: cali270d9f0f5eb: Gained IPv6LL Jan 14 13:41:23.392997 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Jan 14 13:41:23.727156 kubelet[2830]: E0114 13:41:23.726128 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:41:23.727156 kubelet[2830]: E0114 13:41:23.726256 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:23.727156 kubelet[2830]: E0114 13:41:23.727011 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:41:23.729821 kubelet[2830]: E0114 13:41:23.729777 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:23.841284 systemd-networkd[1510]: cali00cb75bfe3e: Gained IPv6LL Jan 14 
13:41:23.903000 audit[5027]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:23.903000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde21184a0 a2=0 a3=7ffde211848c items=0 ppid=2969 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:23.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:23.913000 audit[5027]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:41:23.913000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffde21184a0 a2=0 a3=7ffde211848c items=0 ppid=2969 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:23.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:41:24.728050 kubelet[2830]: E0114 13:41:24.727993 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:24.728556 kubelet[2830]: E0114 13:41:24.728079 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:31.252807 containerd[1601]: time="2026-01-14T13:41:31.252188435Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:41:31.364183 containerd[1601]: time="2026-01-14T13:41:31.364045211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:31.366347 containerd[1601]: time="2026-01-14T13:41:31.366234529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:41:31.366559 containerd[1601]: time="2026-01-14T13:41:31.366289946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:31.366926 kubelet[2830]: E0114 13:41:31.366813 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:41:31.366926 kubelet[2830]: E0114 13:41:31.366919 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:41:31.367469 containerd[1601]: time="2026-01-14T13:41:31.367411468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:41:31.367563 kubelet[2830]: E0114 13:41:31.367401 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdt9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cd75cc4bb-k94tm_calico-system(4c4e216b-6b87-484b-9cae-6f0965d6396b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:31.369332 kubelet[2830]: E0114 13:41:31.369202 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:41:31.435661 containerd[1601]: time="2026-01-14T13:41:31.435414007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:31.454661 containerd[1601]: time="2026-01-14T13:41:31.454429369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: 
active requests=0, bytes read=0" Jan 14 13:41:31.454661 containerd[1601]: time="2026-01-14T13:41:31.454576319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:41:31.455257 kubelet[2830]: E0114 13:41:31.455168 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:41:31.455455 kubelet[2830]: E0114 13:41:31.455258 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:41:31.455507 kubelet[2830]: E0114 13:41:31.455448 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e68290aaef264f96a3b305ab225972cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:31.458142 containerd[1601]: time="2026-01-14T13:41:31.457919993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:41:31.528093 containerd[1601]: 
time="2026-01-14T13:41:31.527840916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:31.529910 containerd[1601]: time="2026-01-14T13:41:31.529777831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:41:31.529910 containerd[1601]: time="2026-01-14T13:41:31.529850938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:31.530144 kubelet[2830]: E0114 13:41:31.530096 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:41:31.530251 kubelet[2830]: E0114 13:41:31.530160 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:41:31.532829 kubelet[2830]: E0114 13:41:31.530900 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:31.538260 kubelet[2830]: E0114 13:41:31.538173 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:41:33.250889 containerd[1601]: time="2026-01-14T13:41:33.250827417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:41:33.322460 containerd[1601]: time="2026-01-14T13:41:33.322391219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:33.323924 containerd[1601]: time="2026-01-14T13:41:33.323844643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:41:33.324005 containerd[1601]: time="2026-01-14T13:41:33.323892908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:33.324169 kubelet[2830]: E0114 13:41:33.324111 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:41:33.324562 kubelet[2830]: E0114 13:41:33.324170 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:41:33.324562 kubelet[2830]: E0114 13:41:33.324297 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:33.326637 containerd[1601]: time="2026-01-14T13:41:33.326563254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:41:33.399504 containerd[1601]: time="2026-01-14T13:41:33.399391210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:33.401245 containerd[1601]: time="2026-01-14T13:41:33.401161590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:41:33.401297 containerd[1601]: time="2026-01-14T13:41:33.401236581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:33.401461 kubelet[2830]: E0114 13:41:33.401404 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:41:33.401506 
kubelet[2830]: E0114 13:41:33.401464 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:41:33.401708 kubelet[2830]: E0114 13:41:33.401588 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:33.403051 kubelet[2830]: E0114 13:41:33.402953 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:34.253044 containerd[1601]: time="2026-01-14T13:41:34.252938581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:41:34.328340 containerd[1601]: time="2026-01-14T13:41:34.328235820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:34.330300 containerd[1601]: time="2026-01-14T13:41:34.330253175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:41:34.330454 containerd[1601]: time="2026-01-14T13:41:34.330369896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:34.330765 kubelet[2830]: E0114 13:41:34.330603 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:41:34.331144 kubelet[2830]: E0114 13:41:34.330774 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:41:34.331144 kubelet[2830]: E0114 13:41:34.330944 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62dg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26r2w_calico-system(d3d1f008-c373-460c-bb65-2604d6d39838): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:34.332236 kubelet[2830]: E0114 13:41:34.332136 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:41:35.251971 containerd[1601]: time="2026-01-14T13:41:35.251783425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:41:35.313968 containerd[1601]: time="2026-01-14T13:41:35.313877158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:35.315613 containerd[1601]: 
time="2026-01-14T13:41:35.315494324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:41:35.315613 containerd[1601]: time="2026-01-14T13:41:35.315596555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:35.315836 kubelet[2830]: E0114 13:41:35.315785 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:35.315836 kubelet[2830]: E0114 13:41:35.315825 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:35.315960 kubelet[2830]: E0114 13:41:35.315926 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-wgpjs_calico-apiserver(cf16b57f-d6ab-45f8-acf0-0156a11bd169): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:35.317449 kubelet[2830]: E0114 13:41:35.317396 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:41:37.259562 containerd[1601]: time="2026-01-14T13:41:37.259480946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:41:37.329957 containerd[1601]: time="2026-01-14T13:41:37.329886280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:37.331657 containerd[1601]: time="2026-01-14T13:41:37.331587718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:41:37.331657 containerd[1601]: time="2026-01-14T13:41:37.331735442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:37.332200 kubelet[2830]: E0114 13:41:37.332064 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:37.332200 kubelet[2830]: E0114 13:41:37.332159 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:37.333231 kubelet[2830]: E0114 13:41:37.332351 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrk54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-7h7cg_calico-apiserver(8d93874e-aa0f-4caa-9b42-ab659ab91c41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:37.334578 kubelet[2830]: E0114 13:41:37.334413 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:41:40.251110 kubelet[2830]: E0114 13:41:40.251030 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:40.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.106:22-10.0.0.1:55806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:41:40.621470 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:55806.service - OpenSSH per-connection server daemon (10.0.0.1:55806). Jan 14 13:41:40.623876 kernel: kauditd_printk_skb: 333 callbacks suppressed Jan 14 13:41:40.623963 kernel: audit: type=1130 audit(1768398100.620:723): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.106:22-10.0.0.1:55806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:41:40.788000 audit[5052]: USER_ACCT pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.789440 sshd[5052]: Accepted publickey for core from 10.0.0.1 port 55806 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:41:40.793263 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:41:40.788000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.804541 systemd-logind[1577]: New session 11 of user core. 
Jan 14 13:41:40.808203 kernel: audit: type=1101 audit(1768398100.788:724): pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.808285 kernel: audit: type=1103 audit(1768398100.788:725): pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.814032 kernel: audit: type=1006 audit(1768398100.788:726): pid=5052 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 13:41:40.814167 kernel: audit: type=1300 audit(1768398100.788:726): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb57bd460 a2=3 a3=0 items=0 ppid=1 pid=5052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:40.788000 audit[5052]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb57bd460 a2=3 a3=0 items=0 ppid=1 pid=5052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:40.788000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:40.827652 kernel: audit: type=1327 audit(1768398100.788:726): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:40.839012 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 13:41:40.874106 kernel: audit: type=1105 audit(1768398100.861:727): pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.861000 audit[5052]: USER_START pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.862000 audit[5056]: CRED_ACQ pid=5056 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:40.883924 kernel: audit: type=1103 audit(1768398100.862:728): pid=5056 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:41.064405 sshd[5056]: Connection closed by 10.0.0.1 port 55806 Jan 14 13:41:41.064953 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Jan 14 13:41:41.066000 audit[5052]: USER_END pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:41.071409 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:55806.service: Deactivated successfully. 
Jan 14 13:41:41.074577 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 13:41:41.076970 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jan 14 13:41:41.066000 audit[5052]: CRED_DISP pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:41.078595 systemd-logind[1577]: Removed session 11. Jan 14 13:41:41.084306 kernel: audit: type=1106 audit(1768398101.066:729): pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:41.084386 kernel: audit: type=1104 audit(1768398101.066:730): pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:41.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.106:22-10.0.0.1:55806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:41:42.250095 kubelet[2830]: E0114 13:41:42.250009 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:41:42.251076 kubelet[2830]: E0114 13:41:42.250762 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:41:46.082996 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:56658.service - OpenSSH per-connection server daemon (10.0.0.1:56658). Jan 14 13:41:46.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.106:22-10.0.0.1:56658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:41:46.085100 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:41:46.085242 kernel: audit: type=1130 audit(1768398106.082:732): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.106:22-10.0.0.1:56658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:41:46.168000 audit[5087]: USER_ACCT pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.169583 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 56658 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:41:46.172377 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:41:46.170000 audit[5087]: CRED_ACQ pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.180632 systemd-logind[1577]: New session 12 of user core. 
Jan 14 13:41:46.189180 kernel: audit: type=1101 audit(1768398106.168:733): pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.189282 kernel: audit: type=1103 audit(1768398106.170:734): pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.189335 kernel: audit: type=1006 audit(1768398106.170:735): pid=5087 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 13:41:46.197465 kernel: audit: type=1300 audit(1768398106.170:735): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8c7ccd90 a2=3 a3=0 items=0 ppid=1 pid=5087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:46.170000 audit[5087]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8c7ccd90 a2=3 a3=0 items=0 ppid=1 pid=5087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:46.210808 kernel: audit: type=1327 audit(1768398106.170:735): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:46.170000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:46.217092 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 13:41:46.220000 audit[5087]: USER_START pid=5087 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.224000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.250819 kernel: audit: type=1105 audit(1768398106.220:736): pid=5087 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.250928 kernel: audit: type=1103 audit(1768398106.224:737): pid=5091 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.252053 kubelet[2830]: E0114 13:41:46.251999 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:41:46.336917 sshd[5091]: Connection closed by 10.0.0.1 port 56658 Jan 14 
13:41:46.338244 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Jan 14 13:41:46.340000 audit[5087]: USER_END pid=5087 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.345912 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:56658.service: Deactivated successfully. Jan 14 13:41:46.348934 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 13:41:46.350389 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jan 14 13:41:46.352835 systemd-logind[1577]: Removed session 12. Jan 14 13:41:46.340000 audit[5087]: CRED_DISP pid=5087 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.364992 kernel: audit: type=1106 audit(1768398106.340:738): pid=5087 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.365112 kernel: audit: type=1104 audit(1768398106.340:739): pid=5087 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:46.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.106:22-10.0.0.1:56658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 13:41:47.252495 kubelet[2830]: E0114 13:41:47.252414 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:41:47.660790 kubelet[2830]: E0114 13:41:47.660402 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:41:49.252151 kubelet[2830]: E0114 13:41:49.252011 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:41:51.253158 kubelet[2830]: E0114 13:41:51.253104 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:41:51.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.106:22-10.0.0.1:56674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:41:51.359272 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:56674.service - OpenSSH per-connection server daemon (10.0.0.1:56674). Jan 14 13:41:51.362136 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:41:51.362223 kernel: audit: type=1130 audit(1768398111.358:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.106:22-10.0.0.1:56674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:41:51.446000 audit[5133]: USER_ACCT pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.447973 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 56674 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:41:51.451325 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:41:51.448000 audit[5133]: CRED_ACQ pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.459772 systemd-logind[1577]: New session 13 of user core. Jan 14 13:41:51.468136 kernel: audit: type=1101 audit(1768398111.446:742): pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.468239 kernel: audit: type=1103 audit(1768398111.448:743): pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.468292 kernel: audit: type=1006 audit(1768398111.448:744): pid=5133 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 13:41:51.474018 kernel: audit: type=1300 audit(1768398111.448:744): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe409215f0 a2=3 a3=0 items=0 ppid=1 pid=5133 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:51.448000 audit[5133]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe409215f0 a2=3 a3=0 items=0 ppid=1 pid=5133 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:51.448000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:51.487067 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 13:41:51.490883 kernel: audit: type=1327 audit(1768398111.448:744): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:51.492000 audit[5133]: USER_START pid=5133 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.492000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.520277 kernel: audit: type=1105 audit(1768398111.492:745): pid=5133 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.520407 kernel: audit: type=1103 audit(1768398111.492:746): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.622824 sshd[5137]: Connection closed by 10.0.0.1 port 56674 Jan 14 13:41:51.623331 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Jan 14 13:41:51.625000 audit[5133]: USER_END pid=5133 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.629883 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:56674.service: Deactivated successfully. Jan 14 13:41:51.633015 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 13:41:51.635654 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jan 14 13:41:51.638949 systemd-logind[1577]: Removed session 13. 
Jan 14 13:41:51.625000 audit[5133]: CRED_DISP pid=5133 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.648971 kernel: audit: type=1106 audit(1768398111.625:747): pid=5133 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.649117 kernel: audit: type=1104 audit(1768398111.625:748): pid=5133 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:51.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.106:22-10.0.0.1:56674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:41:54.263836 containerd[1601]: time="2026-01-14T13:41:54.263635469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:41:54.351315 containerd[1601]: time="2026-01-14T13:41:54.351253193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:54.361497 containerd[1601]: time="2026-01-14T13:41:54.361356808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:41:54.361834 kubelet[2830]: E0114 13:41:54.361756 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:41:54.361834 kubelet[2830]: E0114 13:41:54.361820 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:41:54.362378 kubelet[2830]: E0114 13:41:54.361960 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e68290aaef264f96a3b305ab225972cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:54.362537 containerd[1601]: time="2026-01-14T13:41:54.361414373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:54.365808 
containerd[1601]: time="2026-01-14T13:41:54.365758185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:41:54.428990 containerd[1601]: time="2026-01-14T13:41:54.428891774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:54.431734 containerd[1601]: time="2026-01-14T13:41:54.430636702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:41:54.431734 containerd[1601]: time="2026-01-14T13:41:54.430875368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:54.432208 kubelet[2830]: E0114 13:41:54.432121 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:41:54.432258 kubelet[2830]: E0114 13:41:54.432206 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:41:54.432351 kubelet[2830]: E0114 13:41:54.432323 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:54.436840 kubelet[2830]: E0114 13:41:54.434563 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:41:56.252286 containerd[1601]: time="2026-01-14T13:41:56.252182518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:41:56.339221 containerd[1601]: time="2026-01-14T13:41:56.339022176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:56.343566 containerd[1601]: time="2026-01-14T13:41:56.342264190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:41:56.343566 containerd[1601]: time="2026-01-14T13:41:56.342356231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:56.344128 kubelet[2830]: E0114 13:41:56.343077 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:41:56.344128 kubelet[2830]: E0114 13:41:56.343129 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:41:56.344128 kubelet[2830]: E0114 13:41:56.343254 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdt9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cd75cc4bb-k94tm_calico-system(4c4e216b-6b87-484b-9cae-6f0965d6396b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:56.345619 kubelet[2830]: E0114 13:41:56.345445 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" 
podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:41:56.644329 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:54244.service - OpenSSH per-connection server daemon (10.0.0.1:54244). Jan 14 13:41:56.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.106:22-10.0.0.1:54244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:41:56.646905 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:41:56.647023 kernel: audit: type=1130 audit(1768398116.643:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.106:22-10.0.0.1:54244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:41:56.751000 audit[5152]: USER_ACCT pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.752611 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:41:56.755145 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:41:56.753000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.762014 systemd-logind[1577]: New session 14 of user core. 
Jan 14 13:41:56.769070 kernel: audit: type=1101 audit(1768398116.751:751): pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.769129 kernel: audit: type=1103 audit(1768398116.753:752): pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.769170 kernel: audit: type=1006 audit(1768398116.753:753): pid=5152 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 13:41:56.753000 audit[5152]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc7aadb70 a2=3 a3=0 items=0 ppid=1 pid=5152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:56.784390 kernel: audit: type=1300 audit(1768398116.753:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc7aadb70 a2=3 a3=0 items=0 ppid=1 pid=5152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:41:56.784449 kernel: audit: type=1327 audit(1768398116.753:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:56.753000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:41:56.784996 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 13:41:56.789000 audit[5152]: USER_START pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.792000 audit[5156]: CRED_ACQ pid=5156 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.808300 kernel: audit: type=1105 audit(1768398116.789:754): pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.808368 kernel: audit: type=1103 audit(1768398116.792:755): pid=5156 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.926160 sshd[5156]: Connection closed by 10.0.0.1 port 54244 Jan 14 13:41:56.927946 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Jan 14 13:41:56.929000 audit[5152]: USER_END pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.935312 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:54244.service: Deactivated successfully. 
Jan 14 13:41:56.940604 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 13:41:56.945447 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jan 14 13:41:56.949164 systemd-logind[1577]: Removed session 14. Jan 14 13:41:56.929000 audit[5152]: CRED_DISP pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.959006 kernel: audit: type=1106 audit(1768398116.929:756): pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.959088 kernel: audit: type=1104 audit(1768398116.929:757): pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:41:56.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.106:22-10.0.0.1:54244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:41:59.251179 containerd[1601]: time="2026-01-14T13:41:59.250751936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:41:59.350375 containerd[1601]: time="2026-01-14T13:41:59.350253075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:59.352192 containerd[1601]: time="2026-01-14T13:41:59.352072633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:41:59.352192 containerd[1601]: time="2026-01-14T13:41:59.352126663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:59.352432 kubelet[2830]: E0114 13:41:59.352353 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:59.352432 kubelet[2830]: E0114 13:41:59.352419 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:41:59.353187 kubelet[2830]: E0114 13:41:59.352851 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-wgpjs_calico-apiserver(cf16b57f-d6ab-45f8-acf0-0156a11bd169): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:59.353356 containerd[1601]: time="2026-01-14T13:41:59.353036354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:41:59.354837 kubelet[2830]: E0114 13:41:59.354761 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:41:59.420567 containerd[1601]: time="2026-01-14T13:41:59.420465617Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:59.422380 containerd[1601]: time="2026-01-14T13:41:59.422275662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:41:59.422494 containerd[1601]: time="2026-01-14T13:41:59.422380709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:59.422828 kubelet[2830]: E0114 13:41:59.422731 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:41:59.422916 kubelet[2830]: E0114 13:41:59.422827 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:41:59.423091 kubelet[2830]: E0114 13:41:59.422985 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vo
lumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:59.425394 containerd[1601]: time="2026-01-14T13:41:59.425252386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:41:59.511245 containerd[1601]: time="2026-01-14T13:41:59.511016302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:41:59.513206 containerd[1601]: time="2026-01-14T13:41:59.513143327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:41:59.513322 containerd[1601]: time="2026-01-14T13:41:59.513236254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:41:59.513582 kubelet[2830]: E0114 13:41:59.513510 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:41:59.513648 kubelet[2830]: E0114 13:41:59.513596 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:41:59.514803 kubelet[2830]: E0114 13:41:59.513832 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:41:59.515296 kubelet[2830]: E0114 13:41:59.515240 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:42:00.250652 containerd[1601]: time="2026-01-14T13:42:00.250557718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:42:00.324443 containerd[1601]: time="2026-01-14T13:42:00.324370321Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:00.326017 containerd[1601]: time="2026-01-14T13:42:00.325956022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:42:00.326161 containerd[1601]: time="2026-01-14T13:42:00.326061432Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:00.326278 kubelet[2830]: E0114 13:42:00.326210 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:42:00.326340 kubelet[2830]: E0114 13:42:00.326277 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:42:00.326468 kubelet[2830]: E0114 13:42:00.326408 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62dg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26r2w_calico-system(d3d1f008-c373-460c-bb65-2604d6d39838): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:00.327820 kubelet[2830]: E0114 13:42:00.327755 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:42:01.945942 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:54258.service - OpenSSH per-connection server daemon (10.0.0.1:54258). Jan 14 13:42:01.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.106:22-10.0.0.1:54258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:01.948766 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:01.948849 kernel: audit: type=1130 audit(1768398121.945:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.106:22-10.0.0.1:54258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:02.032000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.034062 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 54258 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:02.038639 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:02.046086 systemd-logind[1577]: New session 15 of user core. 
Jan 14 13:42:02.036000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.058298 kernel: audit: type=1101 audit(1768398122.032:760): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.058354 kernel: audit: type=1103 audit(1768398122.036:761): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.058393 kernel: audit: type=1006 audit(1768398122.036:762): pid=5170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 13:42:02.036000 audit[5170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa74e2f80 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:02.065948 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 13:42:02.076909 kernel: audit: type=1300 audit(1768398122.036:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa74e2f80 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:02.036000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:02.081742 kernel: audit: type=1327 audit(1768398122.036:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:02.081791 kernel: audit: type=1105 audit(1768398122.069:763): pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.069000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.071000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.102787 kernel: audit: type=1103 audit(1768398122.071:764): pid=5174 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.157465 sshd[5174]: Connection closed by 10.0.0.1 port 54258 
Jan 14 13:42:02.158033 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:02.159000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.159000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.176142 kernel: audit: type=1106 audit(1768398122.159:765): pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.176221 kernel: audit: type=1104 audit(1768398122.159:766): pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.180826 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:54258.service: Deactivated successfully. Jan 14 13:42:02.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.106:22-10.0.0.1:54258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:02.182953 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 13:42:02.184080 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. 
Jan 14 13:42:02.187535 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:54268.service - OpenSSH per-connection server daemon (10.0.0.1:54268). Jan 14 13:42:02.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.106:22-10.0.0.1:54268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:02.188599 systemd-logind[1577]: Removed session 15. Jan 14 13:42:02.281000 audit[5189]: USER_ACCT pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.282469 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 54268 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:02.283000 audit[5189]: CRED_ACQ pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.283000 audit[5189]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc14fe620 a2=3 a3=0 items=0 ppid=1 pid=5189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:02.283000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:02.286250 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:02.295166 systemd-logind[1577]: New session 16 of user core. Jan 14 13:42:02.303982 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 13:42:02.307000 audit[5189]: USER_START pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.310000 audit[5194]: CRED_ACQ pid=5194 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.447241 sshd[5194]: Connection closed by 10.0.0.1 port 54268 Jan 14 13:42:02.451002 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:02.453000 audit[5189]: USER_END pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.454000 audit[5189]: CRED_DISP pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.467577 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:54268.service: Deactivated successfully. Jan 14 13:42:02.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.106:22-10.0.0.1:54268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:02.472220 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 13:42:02.476812 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. 
Jan 14 13:42:02.483418 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:49024.service - OpenSSH per-connection server daemon (10.0.0.1:49024). Jan 14 13:42:02.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.106:22-10.0.0.1:49024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:02.487530 systemd-logind[1577]: Removed session 16. Jan 14 13:42:02.556000 audit[5206]: USER_ACCT pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.557513 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 49024 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:02.558000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.558000 audit[5206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7478a780 a2=3 a3=0 items=0 ppid=1 pid=5206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:02.558000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:02.560513 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:02.567648 systemd-logind[1577]: New session 17 of user core. Jan 14 13:42:02.582078 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 13:42:02.585000 audit[5206]: USER_START pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.588000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.683889 sshd[5216]: Connection closed by 10.0.0.1 port 49024 Jan 14 13:42:02.684467 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:02.686000 audit[5206]: USER_END pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.686000 audit[5206]: CRED_DISP pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:02.691203 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:49024.service: Deactivated successfully. Jan 14 13:42:02.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.106:22-10.0.0.1:49024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:02.694127 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 13:42:02.696071 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. 
Jan 14 13:42:02.698156 systemd-logind[1577]: Removed session 17. Jan 14 13:42:04.250155 kubelet[2830]: E0114 13:42:04.250077 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:42:05.251038 containerd[1601]: time="2026-01-14T13:42:05.250985974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:42:05.327838 containerd[1601]: time="2026-01-14T13:42:05.327495275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:05.329322 containerd[1601]: time="2026-01-14T13:42:05.329265509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:42:05.329823 containerd[1601]: time="2026-01-14T13:42:05.329334520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:05.330001 kubelet[2830]: E0114 13:42:05.329926 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:42:05.330479 kubelet[2830]: E0114 13:42:05.330019 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:42:05.330479 kubelet[2830]: E0114 13:42:05.330188 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrk54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-7h7cg_calico-apiserver(8d93874e-aa0f-4caa-9b42-ab659ab91c41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:05.331742 kubelet[2830]: E0114 13:42:05.331567 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:42:06.251744 kubelet[2830]: E0114 13:42:06.251580 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:42:07.263659 kubelet[2830]: E0114 13:42:07.249438 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:42:07.705006 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:49028.service - OpenSSH per-connection server daemon (10.0.0.1:49028). Jan 14 13:42:07.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.106:22-10.0.0.1:49028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:07.707447 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 13:42:07.707570 kernel: audit: type=1130 audit(1768398127.704:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.106:22-10.0.0.1:49028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:07.781000 audit[5237]: USER_ACCT pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.782575 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 49028 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:07.785757 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:07.783000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.793183 systemd-logind[1577]: New session 18 of user core. Jan 14 13:42:07.798333 kernel: audit: type=1101 audit(1768398127.781:787): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.798401 kernel: audit: type=1103 audit(1768398127.783:788): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.798451 kernel: audit: type=1006 audit(1768398127.783:789): pid=5237 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 13:42:07.804771 kernel: audit: type=1300 audit(1768398127.783:789): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaae7cc70 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:07.783000 audit[5237]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaae7cc70 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:07.783000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:07.815072 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 13:42:07.818661 kernel: audit: type=1327 audit(1768398127.783:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:07.819000 audit[5237]: USER_START pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.822000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.839508 kernel: audit: type=1105 audit(1768398127.819:790): pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.839614 kernel: audit: type=1103 audit(1768398127.822:791): pid=5241 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.904493 sshd[5241]: Connection closed by 10.0.0.1 port 49028 Jan 14 13:42:07.904924 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:07.905000 audit[5237]: USER_END pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.909810 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:49028.service: Deactivated successfully. Jan 14 13:42:07.912013 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 13:42:07.913187 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Jan 14 13:42:07.914551 systemd-logind[1577]: Removed session 18. 
Jan 14 13:42:07.906000 audit[5237]: CRED_DISP pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.928132 kernel: audit: type=1106 audit(1768398127.905:792): pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.928187 kernel: audit: type=1104 audit(1768398127.906:793): pid=5237 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:07.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.106:22-10.0.0.1:49028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:09.249865 kubelet[2830]: E0114 13:42:09.249786 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:42:10.249657 kubelet[2830]: E0114 13:42:10.249563 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:42:12.922920 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:60860.service - OpenSSH per-connection server daemon (10.0.0.1:60860). Jan 14 13:42:12.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.106:22-10.0.0.1:60860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:12.925049 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:12.925193 kernel: audit: type=1130 audit(1768398132.922:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.106:22-10.0.0.1:60860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:13.021000 audit[5256]: USER_ACCT pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.022318 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 60860 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:13.025622 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:13.022000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.034284 systemd-logind[1577]: New session 19 of user core. Jan 14 13:42:13.044052 kernel: audit: type=1101 audit(1768398133.021:796): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.044100 kernel: audit: type=1103 audit(1768398133.022:797): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.044135 kernel: audit: type=1006 audit(1768398133.023:798): pid=5256 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 13:42:13.023000 audit[5256]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6313d70 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:13.061273 kernel: audit: type=1300 audit(1768398133.023:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6313d70 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:13.061443 kernel: audit: type=1327 audit(1768398133.023:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:13.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:13.066164 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 13:42:13.069000 audit[5256]: USER_START pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.072000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.091131 kernel: audit: type=1105 audit(1768398133.069:799): pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.091215 kernel: audit: type=1103 audit(1768398133.072:800): pid=5260 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.178355 sshd[5260]: Connection closed by 10.0.0.1 port 60860 Jan 14 13:42:13.178382 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:13.179000 audit[5256]: USER_END pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.184218 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:60860.service: Deactivated successfully. Jan 14 13:42:13.187103 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 13:42:13.190485 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Jan 14 13:42:13.180000 audit[5256]: CRED_DISP pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.192422 systemd-logind[1577]: Removed session 19. 
Jan 14 13:42:13.201409 kernel: audit: type=1106 audit(1768398133.179:801): pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.201507 kernel: audit: type=1104 audit(1768398133.180:802): pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:13.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.106:22-10.0.0.1:60860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:13.251177 kubelet[2830]: E0114 13:42:13.251103 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:42:14.250490 kubelet[2830]: E0114 13:42:14.250400 2830 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:42:15.251234 kubelet[2830]: E0114 13:42:15.251095 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:42:17.250564 kubelet[2830]: E0114 13:42:17.250493 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" 
podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:42:18.192553 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:60868.service - OpenSSH per-connection server daemon (10.0.0.1:60868). Jan 14 13:42:18.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.106:22-10.0.0.1:60868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:18.194894 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:18.195050 kernel: audit: type=1130 audit(1768398138.192:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.106:22-10.0.0.1:60868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:18.303000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.304870 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 60868 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:18.308170 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:18.305000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.316253 systemd-logind[1577]: New session 20 of user core. 
Jan 14 13:42:18.323937 kernel: audit: type=1101 audit(1768398138.303:805): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.324031 kernel: audit: type=1103 audit(1768398138.305:806): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.324056 kernel: audit: type=1006 audit(1768398138.305:807): pid=5299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 13:42:18.305000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd4192540 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:18.342224 kernel: audit: type=1300 audit(1768398138.305:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd4192540 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:18.342339 kernel: audit: type=1327 audit(1768398138.305:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:18.305000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:18.349036 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 13:42:18.352000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.354000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.373211 kernel: audit: type=1105 audit(1768398138.352:808): pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.373311 kernel: audit: type=1103 audit(1768398138.354:809): pid=5303 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.458604 sshd[5303]: Connection closed by 10.0.0.1 port 60868 Jan 14 13:42:18.458951 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:18.459000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.463515 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:60868.service: Deactivated successfully. 
Jan 14 13:42:18.465971 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 13:42:18.467193 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Jan 14 13:42:18.468856 systemd-logind[1577]: Removed session 20. Jan 14 13:42:18.459000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.476749 kernel: audit: type=1106 audit(1768398138.459:810): pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.476801 kernel: audit: type=1104 audit(1768398138.459:811): pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:18.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.106:22-10.0.0.1:60868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:21.256791 kubelet[2830]: E0114 13:42:21.256488 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:42:23.478585 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:35510.service - OpenSSH per-connection server daemon (10.0.0.1:35510). Jan 14 13:42:23.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.106:22-10.0.0.1:35510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:23.481867 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:23.482061 kernel: audit: type=1130 audit(1768398143.478:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.106:22-10.0.0.1:35510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:23.566000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.567912 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 35510 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:23.571503 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:23.568000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.578865 systemd-logind[1577]: New session 21 of user core. Jan 14 13:42:23.588237 kernel: audit: type=1101 audit(1768398143.566:814): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.588311 kernel: audit: type=1103 audit(1768398143.568:815): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.588335 kernel: audit: type=1006 audit(1768398143.569:816): pid=5316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 13:42:23.593400 kernel: audit: type=1300 audit(1768398143.569:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe57325800 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:23.569000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe57325800 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:23.605026 kernel: audit: type=1327 audit(1768398143.569:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:23.569000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:23.615137 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 13:42:23.619000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.622000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.656938 kernel: audit: type=1105 audit(1768398143.619:817): pid=5316 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.657073 kernel: audit: type=1103 audit(1768398143.622:818): pid=5320 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.757436 sshd[5320]: Connection closed by 10.0.0.1 port 35510 Jan 14 13:42:23.757507 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:23.759000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.765610 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:35510.service: Deactivated successfully. Jan 14 13:42:23.770107 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:42:23.772359 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:42:23.759000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.775555 systemd-logind[1577]: Removed session 21. 
Jan 14 13:42:23.780434 kernel: audit: type=1106 audit(1768398143.759:819): pid=5316 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.780500 kernel: audit: type=1104 audit(1768398143.759:820): pid=5316 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:23.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.106:22-10.0.0.1:35510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:24.250744 kubelet[2830]: E0114 13:42:24.250600 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:42:26.255414 kubelet[2830]: E0114 13:42:26.255311 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:42:27.253374 kubelet[2830]: E0114 13:42:27.253225 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:42:28.783507 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:35522.service - OpenSSH per-connection server daemon (10.0.0.1:35522). Jan 14 13:42:28.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.106:22-10.0.0.1:35522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:28.786446 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:28.786556 kernel: audit: type=1130 audit(1768398148.783:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.106:22-10.0.0.1:35522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:28.883000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.884559 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 35522 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:28.888095 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:28.885000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.895931 systemd-logind[1577]: New session 22 of user core. Jan 14 13:42:28.907100 kernel: audit: type=1101 audit(1768398148.883:823): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.907250 kernel: audit: type=1103 audit(1768398148.885:824): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.907278 kernel: audit: type=1006 audit(1768398148.885:825): pid=5333 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 13:42:28.885000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2871db70 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:28.914984 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 13:42:28.926140 kernel: audit: type=1300 audit(1768398148.885:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2871db70 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:28.885000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:28.931099 kernel: audit: type=1327 audit(1768398148.885:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:28.931207 kernel: audit: type=1105 audit(1768398148.918:826): pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.918000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.921000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:28.976570 kernel: audit: type=1103 audit(1768398148.921:827): pid=5337 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:30.128755 kubelet[2830]: E0114 13:42:30.113164 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:42:30.795491 kubelet[2830]: E0114 13:42:30.795442 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:42:30.809020 kubelet[2830]: E0114 13:42:30.808866 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:42:30.920173 sshd[5337]: Connection closed by 10.0.0.1 port 35522 Jan 14 13:42:30.916442 sshd-session[5333]: 
pam_unix(sshd:session): session closed for user core Jan 14 13:42:30.923000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:30.923000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:31.068456 kernel: audit: type=1106 audit(1768398150.923:828): pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:31.069044 kernel: audit: type=1104 audit(1768398150.923:829): pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:31.087524 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:35522.service: Deactivated successfully. Jan 14 13:42:31.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.106:22-10.0.0.1:35522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:31.092661 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:42:31.096766 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Jan 14 13:42:31.100201 systemd-logind[1577]: Removed session 22. 
Jan 14 13:42:34.250398 kubelet[2830]: E0114 13:42:34.250012 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:42:35.941800 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:35.941972 kernel: audit: type=1130 audit(1768398155.931:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.106:22-10.0.0.1:53276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:35.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.106:22-10.0.0.1:53276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:35.931914 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:53276.service - OpenSSH per-connection server daemon (10.0.0.1:53276). 
Jan 14 13:42:36.024000 audit[5353]: USER_ACCT pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.025191 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 53276 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:36.029119 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:36.025000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.038249 systemd-logind[1577]: New session 23 of user core. Jan 14 13:42:36.043313 kernel: audit: type=1101 audit(1768398156.024:832): pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.043403 kernel: audit: type=1103 audit(1768398156.025:833): pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.043434 kernel: audit: type=1006 audit(1768398156.025:834): pid=5353 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 13:42:36.025000 audit[5353]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea8bdf430 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:36.057073 kernel: audit: type=1300 audit(1768398156.025:834): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea8bdf430 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:36.057111 kernel: audit: type=1327 audit(1768398156.025:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:36.025000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:36.073053 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 13:42:36.076000 audit[5353]: USER_START pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.087789 kernel: audit: type=1105 audit(1768398156.076:835): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.088000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.098926 kernel: audit: type=1103 audit(1768398156.088:836): pid=5357 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.202131 sshd[5357]: Connection closed by 10.0.0.1 port 53276 Jan 14 13:42:36.202523 sshd-session[5353]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:36.203000 audit[5353]: USER_END pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.204000 audit[5353]: CRED_DISP pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.227953 kernel: audit: type=1106 audit(1768398156.203:837): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.228096 kernel: audit: type=1104 audit(1768398156.204:838): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.243843 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:53276.service: Deactivated successfully. Jan 14 13:42:36.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.106:22-10.0.0.1:53276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:36.246395 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 13:42:36.247915 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Jan 14 13:42:36.251067 systemd-logind[1577]: Removed session 23. Jan 14 13:42:36.253310 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:53292.service - OpenSSH per-connection server daemon (10.0.0.1:53292). Jan 14 13:42:36.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.106:22-10.0.0.1:53292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:36.348000 audit[5370]: USER_ACCT pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.349822 sshd[5370]: Accepted publickey for core from 10.0.0.1 port 53292 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:36.350000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.350000 audit[5370]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdea831360 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:36.350000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:36.352554 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:36.360652 systemd-logind[1577]: New 
session 24 of user core. Jan 14 13:42:36.374165 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 13:42:36.377000 audit[5370]: USER_START pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.380000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.682889 sshd[5374]: Connection closed by 10.0.0.1 port 53292 Jan 14 13:42:36.683330 sshd-session[5370]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:36.685000 audit[5370]: USER_END pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.685000 audit[5370]: CRED_DISP pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.694148 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:53292.service: Deactivated successfully. Jan 14 13:42:36.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.106:22-10.0.0.1:53292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:36.696956 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 13:42:36.698466 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. Jan 14 13:42:36.702832 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:53294.service - OpenSSH per-connection server daemon (10.0.0.1:53294). Jan 14 13:42:36.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.106:22-10.0.0.1:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:36.704093 systemd-logind[1577]: Removed session 24. Jan 14 13:42:36.797000 audit[5386]: USER_ACCT pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.798216 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 53294 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:36.798000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.798000 audit[5386]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1280a550 a2=3 a3=0 items=0 ppid=1 pid=5386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:36.798000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:36.800604 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:36.807265 systemd-logind[1577]: New 
session 25 of user core. Jan 14 13:42:36.818959 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 13:42:36.822000 audit[5386]: USER_START pid=5386 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:36.824000 audit[5390]: CRED_ACQ pid=5390 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.254334 containerd[1601]: time="2026-01-14T13:42:37.254170639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:42:37.336976 containerd[1601]: time="2026-01-14T13:42:37.336817381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:37.338875 containerd[1601]: time="2026-01-14T13:42:37.338507281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:42:37.338994 containerd[1601]: time="2026-01-14T13:42:37.338557415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:37.340032 kubelet[2830]: E0114 13:42:37.339972 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:42:37.340032 kubelet[2830]: E0114 13:42:37.340046 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:42:37.340965 kubelet[2830]: E0114 13:42:37.340239 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdt9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5cd75cc4bb-k94tm_calico-system(4c4e216b-6b87-484b-9cae-6f0965d6396b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:37.341901 kubelet[2830]: E0114 13:42:37.341844 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:42:37.554000 audit[5404]: 
NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:42:37.554000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc606090a0 a2=0 a3=7ffc6060908c items=0 ppid=2969 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:37.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:42:37.558160 sshd[5390]: Connection closed by 10.0.0.1 port 53294 Jan 14 13:42:37.558576 sshd-session[5386]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:37.562000 audit[5404]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:42:37.562000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc606090a0 a2=0 a3=0 items=0 ppid=2969 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:37.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:42:37.563000 audit[5386]: USER_END pid=5386 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.564000 audit[5386]: CRED_DISP pid=5386 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.573380 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:53294.service: Deactivated successfully. Jan 14 13:42:37.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.106:22-10.0.0.1:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:37.578390 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 13:42:37.580998 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. Jan 14 13:42:37.580000 audit[5407]: NETFILTER_CFG table=filter:136 family=2 entries=38 op=nft_register_rule pid=5407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:42:37.580000 audit[5407]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffdc4238f0 a2=0 a3=7fffdc4238dc items=0 ppid=2969 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:37.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:42:37.587979 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:53306.service - OpenSSH per-connection server daemon (10.0.0.1:53306). 
Jan 14 13:42:37.587000 audit[5407]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:42:37.587000 audit[5407]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffdc4238f0 a2=0 a3=0 items=0 ppid=2969 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:37.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:42:37.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.106:22-10.0.0.1:53306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:37.592438 systemd-logind[1577]: Removed session 25. 
Jan 14 13:42:37.682000 audit[5411]: USER_ACCT pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.683884 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 53306 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:37.684000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.684000 audit[5411]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff60da8b40 a2=3 a3=0 items=0 ppid=1 pid=5411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:37.684000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:37.686926 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:37.693918 systemd-logind[1577]: New session 26 of user core. Jan 14 13:42:37.702997 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 13:42:37.706000 audit[5411]: USER_START pid=5411 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.709000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.947830 sshd[5415]: Connection closed by 10.0.0.1 port 53306 Jan 14 13:42:37.948945 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:37.951000 audit[5411]: USER_END pid=5411 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.951000 audit[5411]: CRED_DISP pid=5411 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:37.959101 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:53306.service: Deactivated successfully. Jan 14 13:42:37.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.106:22-10.0.0.1:53306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:37.964637 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:42:37.968181 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit. 
Jan 14 13:42:37.971102 systemd[1]: Started sshd@25-10.0.0.106:22-10.0.0.1:53318.service - OpenSSH per-connection server daemon (10.0.0.1:53318). Jan 14 13:42:37.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.106:22-10.0.0.1:53318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:37.972570 systemd-logind[1577]: Removed session 26. Jan 14 13:42:38.053000 audit[5428]: USER_ACCT pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:38.054260 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 53318 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:38.055000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:38.055000 audit[5428]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7839ad00 a2=3 a3=0 items=0 ppid=1 pid=5428 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:38.055000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:38.057258 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:38.064603 systemd-logind[1577]: New session 27 of user core. Jan 14 13:42:38.076067 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 13:42:38.079000 audit[5428]: USER_START pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:38.081000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:38.173550 sshd[5432]: Connection closed by 10.0.0.1 port 53318 Jan 14 13:42:38.174364 sshd-session[5428]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:38.178000 audit[5428]: USER_END pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:38.178000 audit[5428]: CRED_DISP pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:38.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.106:22-10.0.0.1:53318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:38.182003 systemd[1]: sshd@25-10.0.0.106:22-10.0.0.1:53318.service: Deactivated successfully. Jan 14 13:42:38.186560 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 13:42:38.189800 systemd-logind[1577]: Session 27 logged out. Waiting for processes to exit. 
Jan 14 13:42:38.191350 systemd-logind[1577]: Removed session 27. Jan 14 13:42:38.250632 kubelet[2830]: E0114 13:42:38.250442 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:42:38.252601 kubelet[2830]: E0114 13:42:38.252158 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:42:43.209344 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 13:42:43.209547 kernel: audit: type=1130 audit(1768398163.195:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.106:22-10.0.0.1:40228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:43.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.106:22-10.0.0.1:40228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:43.196474 systemd[1]: Started sshd@26-10.0.0.106:22-10.0.0.1:40228.service - OpenSSH per-connection server daemon (10.0.0.1:40228). Jan 14 13:42:43.257906 containerd[1601]: time="2026-01-14T13:42:43.257655563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:42:43.338504 containerd[1601]: time="2026-01-14T13:42:43.338328623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:43.341222 containerd[1601]: time="2026-01-14T13:42:43.340949202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:42:43.341222 containerd[1601]: time="2026-01-14T13:42:43.341113058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:43.342196 kubelet[2830]: E0114 13:42:43.342004 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:42:43.342196 kubelet[2830]: E0114 13:42:43.342118 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:42:43.343368 kubelet[2830]: 
E0114 13:42:43.343161 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e68290aaef264f96a3b305ab225972cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:43.346113 containerd[1601]: time="2026-01-14T13:42:43.346043983Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:42:43.361000 audit[5453]: USER_ACCT pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.365871 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:43.374263 kernel: audit: type=1101 audit(1768398163.361:881): pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.374302 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 40228 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:43.361000 audit[5453]: CRED_ACQ pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.381898 systemd-logind[1577]: New session 28 of user core. 
Jan 14 13:42:43.385799 kernel: audit: type=1103 audit(1768398163.361:882): pid=5453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.385965 kernel: audit: type=1006 audit(1768398163.361:883): pid=5453 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 13:42:43.361000 audit[5453]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3c430400 a2=3 a3=0 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:43.404111 kernel: audit: type=1300 audit(1768398163.361:883): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3c430400 a2=3 a3=0 items=0 ppid=1 pid=5453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:43.361000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:43.408332 kernel: audit: type=1327 audit(1768398163.361:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:43.409126 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 13:42:43.414000 audit[5453]: USER_START pid=5453 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.429535 kubelet[2830]: E0114 13:42:43.422363 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:42:43.429535 kubelet[2830]: E0114 13:42:43.422439 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:42:43.429535 kubelet[2830]: E0114 13:42:43.422585 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmtm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66cc84c7bd-ghp9r_calico-system(0591f86b-0203-41aa-ad8c-78b05a83945e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:43.429535 kubelet[2830]: E0114 13:42:43.424822 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e" Jan 14 13:42:43.430081 kernel: audit: type=1105 audit(1768398163.414:884): pid=5453 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.430138 containerd[1601]: time="2026-01-14T13:42:43.419545031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:43.430138 containerd[1601]: time="2026-01-14T13:42:43.421545923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:42:43.430138 containerd[1601]: time="2026-01-14T13:42:43.421949366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
active requests=0, bytes read=0" Jan 14 13:42:43.414000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.453782 kernel: audit: type=1103 audit(1768398163.414:885): pid=5457 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.596815 sshd[5457]: Connection closed by 10.0.0.1 port 40228 Jan 14 13:42:43.598091 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:43.599000 audit[5453]: USER_END pid=5453 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.606363 systemd[1]: sshd@26-10.0.0.106:22-10.0.0.1:40228.service: Deactivated successfully. Jan 14 13:42:43.609403 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 13:42:43.611061 systemd-logind[1577]: Session 28 logged out. Waiting for processes to exit. Jan 14 13:42:43.599000 audit[5453]: CRED_DISP pid=5453 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.614389 systemd-logind[1577]: Removed session 28. 
Jan 14 13:42:43.620574 kernel: audit: type=1106 audit(1768398163.599:886): pid=5453 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.620904 kernel: audit: type=1104 audit(1768398163.599:887): pid=5453 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:43.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.106:22-10.0.0.1:40228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:44.210000 audit[5470]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:42:44.210000 audit[5470]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc1e623e0 a2=0 a3=7ffdc1e623cc items=0 ppid=2969 pid=5470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:44.210000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:42:44.221000 audit[5470]: NETFILTER_CFG table=nat:139 family=2 entries=104 op=nft_register_chain pid=5470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:42:44.221000 audit[5470]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdc1e623e0 a2=0 a3=7ffdc1e623cc items=0 ppid=2969 pid=5470 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:44.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:42:44.249155 kubelet[2830]: E0114 13:42:44.249110 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:42:45.251626 containerd[1601]: time="2026-01-14T13:42:45.251504455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:42:45.323912 containerd[1601]: time="2026-01-14T13:42:45.323622032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:45.325638 containerd[1601]: time="2026-01-14T13:42:45.325543790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:42:45.325802 containerd[1601]: time="2026-01-14T13:42:45.325775483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:45.326070 kubelet[2830]: E0114 13:42:45.325966 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:42:45.326070 kubelet[2830]: E0114 13:42:45.326062 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:42:45.326982 kubelet[2830]: E0114 13:42:45.326370 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62dg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26r2w_calico-system(d3d1f008-c373-460c-bb65-2604d6d39838): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:45.328413 kubelet[2830]: E0114 13:42:45.328360 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838" Jan 14 13:42:47.249505 kubelet[2830]: E0114 13:42:47.249355 2830 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:42:48.614973 systemd[1]: Started sshd@27-10.0.0.106:22-10.0.0.1:40238.service - OpenSSH per-connection server daemon (10.0.0.1:40238). Jan 14 13:42:48.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.106:22-10.0.0.1:40238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:48.618843 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 13:42:48.618943 kernel: audit: type=1130 audit(1768398168.614:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.106:22-10.0.0.1:40238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:48.707000 audit[5498]: USER_ACCT pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.708398 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 40238 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:48.711196 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:48.708000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.721894 systemd-logind[1577]: New session 29 of user core. 
Jan 14 13:42:48.730455 kernel: audit: type=1101 audit(1768398168.707:892): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.730586 kernel: audit: type=1103 audit(1768398168.708:893): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.730634 kernel: audit: type=1006 audit(1768398168.709:894): pid=5498 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 13:42:48.709000 audit[5498]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1c6d6100 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:48.746459 kernel: audit: type=1300 audit(1768398168.709:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1c6d6100 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:42:48.746574 kernel: audit: type=1327 audit(1768398168.709:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:48.709000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:42:48.753117 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 14 13:42:48.758000 audit[5498]: USER_START pid=5498 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.763000 audit[5502]: CRED_ACQ pid=5502 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.790257 kernel: audit: type=1105 audit(1768398168.758:895): pid=5498 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.790819 kernel: audit: type=1103 audit(1768398168.763:896): pid=5502 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.886843 sshd[5502]: Connection closed by 10.0.0.1 port 40238 Jan 14 13:42:48.887299 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Jan 14 13:42:48.888000 audit[5498]: USER_END pid=5498 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.893336 systemd[1]: sshd@27-10.0.0.106:22-10.0.0.1:40238.service: Deactivated successfully. 
Jan 14 13:42:48.896588 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 13:42:48.898834 systemd-logind[1577]: Session 29 logged out. Waiting for processes to exit. Jan 14 13:42:48.901264 systemd-logind[1577]: Removed session 29. Jan 14 13:42:48.916546 kernel: audit: type=1106 audit(1768398168.888:897): pid=5498 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.916800 kernel: audit: type=1104 audit(1768398168.889:898): pid=5498 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.889000 audit[5498]: CRED_DISP pid=5498 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:48.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.106:22-10.0.0.1:40238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:49.252007 containerd[1601]: time="2026-01-14T13:42:49.250789198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:42:49.318013 containerd[1601]: time="2026-01-14T13:42:49.317867071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:49.319475 containerd[1601]: time="2026-01-14T13:42:49.319364392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:42:49.319475 containerd[1601]: time="2026-01-14T13:42:49.319430090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:49.319876 kubelet[2830]: E0114 13:42:49.319815 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:42:49.319876 kubelet[2830]: E0114 13:42:49.319877 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:42:49.320513 kubelet[2830]: E0114 13:42:49.319996 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrk54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-7h7cg_calico-apiserver(8d93874e-aa0f-4caa-9b42-ab659ab91c41): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:49.321536 kubelet[2830]: E0114 13:42:49.321480 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-7h7cg" podUID="8d93874e-aa0f-4caa-9b42-ab659ab91c41" Jan 14 13:42:53.249996 kubelet[2830]: E0114 13:42:53.249909 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5cd75cc4bb-k94tm" podUID="4c4e216b-6b87-484b-9cae-6f0965d6396b" Jan 14 13:42:53.251272 containerd[1601]: time="2026-01-14T13:42:53.251135886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:42:53.320578 containerd[1601]: time="2026-01-14T13:42:53.320362824Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:53.322206 containerd[1601]: time="2026-01-14T13:42:53.322081020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:42:53.322206 containerd[1601]: 
time="2026-01-14T13:42:53.322172201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:53.322543 kubelet[2830]: E0114 13:42:53.322421 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:42:53.322543 kubelet[2830]: E0114 13:42:53.322485 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:42:53.322978 kubelet[2830]: E0114 13:42:53.322922 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 13:42:53.323430 containerd[1601]: time="2026-01-14T13:42:53.323336666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:42:53.392326 containerd[1601]: time="2026-01-14T13:42:53.392251982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:53.394866 containerd[1601]: time="2026-01-14T13:42:53.394561022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:42:53.394866 containerd[1601]: time="2026-01-14T13:42:53.394618854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:53.395361 kubelet[2830]: E0114 13:42:53.395276 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:42:53.395489 kubelet[2830]: E0114 13:42:53.395372 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:42:53.395974 kubelet[2830]: E0114 13:42:53.395815 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d8b4854-wgpjs_calico-apiserver(cf16b57f-d6ab-45f8-acf0-0156a11bd169): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:53.398019 kubelet[2830]: E0114 13:42:53.397965 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d8b4854-wgpjs" podUID="cf16b57f-d6ab-45f8-acf0-0156a11bd169" Jan 14 13:42:53.398140 containerd[1601]: time="2026-01-14T13:42:53.397627312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:42:53.478993 containerd[1601]: time="2026-01-14T13:42:53.478930650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:42:53.480862 containerd[1601]: time="2026-01-14T13:42:53.480807174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:42:53.480924 containerd[1601]: time="2026-01-14T13:42:53.480832515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:42:53.481362 kubelet[2830]: E0114 13:42:53.481288 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:42:53.481362 
kubelet[2830]: E0114 13:42:53.481343 2830 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:42:53.482361 kubelet[2830]: E0114 13:42:53.482072 2830 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jh72h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5l6pz_calico-system(70f39dd7-0818-4b8e-a6d8-f99942268a1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:42:53.483623 kubelet[2830]: E0114 13:42:53.483549 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5l6pz" podUID="70f39dd7-0818-4b8e-a6d8-f99942268a1b" Jan 14 13:42:53.913799 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:42:53.913985 kernel: audit: type=1130 audit(1768398173.906:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.106:22-10.0.0.1:45452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:42:53.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.106:22-10.0.0.1:45452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:42:53.906982 systemd[1]: Started sshd@28-10.0.0.106:22-10.0.0.1:45452.service - OpenSSH per-connection server daemon (10.0.0.1:45452). Jan 14 13:42:53.977000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:53.978968 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 45452 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8 Jan 14 13:42:53.982202 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:42:53.979000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:42:53.991106 systemd-logind[1577]: New session 30 of user core. 
Jan 14 13:42:53.996307 kernel: audit: type=1101 audit(1768398173.977:901): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:53.996357 kernel: audit: type=1103 audit(1768398173.979:902): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:53.999021 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 13:42:54.002143 kernel: audit: type=1006 audit(1768398173.979:903): pid=5523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 14 13:42:54.010639 kernel: audit: type=1300 audit(1768398173.979:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc02f22ef0 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:42:53.979000 audit[5523]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc02f22ef0 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:42:53.979000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 13:42:54.025117 kernel: audit: type=1327 audit(1768398173.979:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 13:42:54.025221 kernel: audit: type=1105 audit(1768398174.007:904): pid=5523 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.007000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.011000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.037660 kernel: audit: type=1103 audit(1768398174.011:905): pid=5527 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.123786 sshd[5527]: Connection closed by 10.0.0.1 port 45452
Jan 14 13:42:54.123773 sshd-session[5523]: pam_unix(sshd:session): session closed for user core
Jan 14 13:42:54.125000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.146005 kernel: audit: type=1106 audit(1768398174.125:906): pid=5523 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.125000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.148821 systemd[1]: sshd@28-10.0.0.106:22-10.0.0.1:45452.service: Deactivated successfully.
Jan 14 13:42:54.152114 systemd[1]: session-30.scope: Deactivated successfully.
Jan 14 13:42:54.154293 systemd-logind[1577]: Session 30 logged out. Waiting for processes to exit.
Jan 14 13:42:54.156278 systemd-logind[1577]: Removed session 30.
Jan 14 13:42:54.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.106:22-10.0.0.1:45452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:42:54.157724 kernel: audit: type=1104 audit(1768398174.125:907): pid=5523 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:54.255157 kubelet[2830]: E0114 13:42:54.255008 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66cc84c7bd-ghp9r" podUID="0591f86b-0203-41aa-ad8c-78b05a83945e"
Jan 14 13:42:58.314073 kubelet[2830]: E0114 13:42:58.313108 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26r2w" podUID="d3d1f008-c373-460c-bb65-2604d6d39838"
Jan 14 13:42:59.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.106:22-10.0.0.1:45468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:42:59.139031 systemd[1]: Started sshd@29-10.0.0.106:22-10.0.0.1:45468.service - OpenSSH per-connection server daemon (10.0.0.1:45468).
Jan 14 13:42:59.141144 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 13:42:59.141194 kernel: audit: type=1130 audit(1768398179.138:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.106:22-10.0.0.1:45468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:42:59.220000 audit[5555]: USER_ACCT pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.221096 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 45468 ssh2: RSA SHA256:0NpdvfrJOwASkiZktVZJjkqIvK6x8CCfEKdy9IHGxX8
Jan 14 13:42:59.224043 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:42:59.221000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.237493 kernel: audit: type=1101 audit(1768398179.220:910): pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.237566 kernel: audit: type=1103 audit(1768398179.221:911): pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.237982 kernel: audit: type=1006 audit(1768398179.221:912): pid=5555 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1
Jan 14 13:42:59.241180 systemd-logind[1577]: New session 31 of user core.
Jan 14 13:42:59.221000 audit[5555]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc80e248d0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:42:59.252476 kernel: audit: type=1300 audit(1768398179.221:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc80e248d0 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:42:59.221000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 13:42:59.256737 kernel: audit: type=1327 audit(1768398179.221:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 13:42:59.258053 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 14 13:42:59.262000 audit[5555]: USER_START pid=5555 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.265000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.281434 kernel: audit: type=1105 audit(1768398179.262:913): pid=5555 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.281509 kernel: audit: type=1103 audit(1768398179.265:914): pid=5560 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.349642 sshd[5560]: Connection closed by 10.0.0.1 port 45468
Jan 14 13:42:59.352144 sshd-session[5555]: pam_unix(sshd:session): session closed for user core
Jan 14 13:42:59.353000 audit[5555]: USER_END pid=5555 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.360287 systemd[1]: sshd@29-10.0.0.106:22-10.0.0.1:45468.service: Deactivated successfully.
Jan 14 13:42:59.363207 systemd[1]: session-31.scope: Deactivated successfully.
Jan 14 13:42:59.364741 kernel: audit: type=1106 audit(1768398179.353:915): pid=5555 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.353000 audit[5555]: CRED_DISP pid=5555 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:42:59.366087 systemd-logind[1577]: Session 31 logged out. Waiting for processes to exit.
Jan 14 13:42:59.367361 systemd-logind[1577]: Removed session 31.
Jan 14 13:42:59.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.106:22-10.0.0.1:45468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:42:59.372729 kernel: audit: type=1104 audit(1768398179.353:916): pid=5555 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:43:00.252149 kubelet[2830]: E0114 13:43:00.251943 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"